Article

Logical Divergence, Logical Entropy, and Logical Mutual Information in Product MV-Algebras

by Dagmar Markechová 1,*, Batool Mosapour 2 and Abolfazl Ebrahimzadeh 3

1 Department of Mathematics, Faculty of Natural Sciences, Constantine the Philosopher University in Nitra, A. Hlinku 1, SK-949 01 Nitra, Slovakia
2 Department of Mathematics, Farhangian University, 7616914111 Kerman, Iran
3 Young Researchers and Elite Club, Zahedan Branch, Islamic Azad University, 9816883673 Zahedan, Iran
* Author to whom correspondence should be addressed.
Entropy 2018, 20(2), 129; https://doi.org/10.3390/e20020129
Submission received: 25 January 2018 / Revised: 6 February 2018 / Accepted: 9 February 2018 / Published: 16 February 2018
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

In this paper, using the logical entropy function, we propose a new kind of entropy in product MV-algebras, namely the logical entropy and its conditional version. Fundamental characteristics of these quantities are derived, and the results on logical entropy are then used to define the logical mutual information of experiments in the studied case. In addition, we define the logical cross entropy and logical divergence for the examined situation and prove basic properties of the suggested quantities. Several numerical examples are provided to illustrate the results.

1. Introduction

In all areas of empirical research, it is very important to know how much information we gain by the realization of experiments. As is well known, the measure of information is entropy, the standard approach being based on Shannon entropy [1]. The standard mathematical model of an experiment in information theory [2] is a measurable partition of a probability space. Recall that a measurable partition of a probability space $(X, S, P)$ is a sequence $A = \{A_1, \dots, A_n\}$ of measurable subsets of $X$ such that $\bigcup_{i=1}^{n} A_i = X$ and $A_i \cap A_j = \emptyset$ whenever $i \neq j$. The Shannon entropy of the measurable partition $A = \{A_1, \dots, A_n\}$ with probabilities $p_i = P(A_i)$, $i = 1, \dots, n$, of the corresponding elements is the number $h_S(A) = \sum_{i=1}^{n} S(p_i)$, where $S : [0, 1] \to [0, \infty)$ is the Shannon entropy function defined by the formula:

$S(x) = \begin{cases} -x \log x, & \text{if } x > 0; \\ 0, & \text{if } x = 0. \end{cases}$ (1)
In classical theory, partitions are defined within Cantor set theory. However, it has turned out that, in many cases, partitions defined in the context of fuzzy set theory [3] are more suitable for solving real problems. Hence, numerous suggestions have been put forward to generalize classical partitions to fuzzy partitions [4,5,6,7,8,9,10]. Fuzzy partitions provide a mathematical model of random experiments whose outcomes are vague, imprecisely defined events. The Shannon entropy of fuzzy partitions has been studied by many authors; we refer the reader to, e.g., [11,12,13,14,15,16,17,18,19,20,21].
The notion of an MV-algebra, originally proposed by Chang in [22] in order to give an algebraic counterpart of the Łukasiewicz many-valued logic [23] (MV = many-valued), generalizes some classes of fuzzy sets. MV-algebras have been investigated by numerous international research groups [24,25,26,27,28]. A Shannon entropy theory for MV-algebras was created in [29,30]. Fuzzy set theory is a rapidly evolving field of theoretical and applied mathematical research. Other algebraic structures based on fuzzy set theory are also currently the subject of intensive study, such as D-posets [31,32,33], effect algebras [34], and A-posets [35,36]. Some results concerning Shannon entropy on these structures have been provided, e.g., in [37,38,39].
An important case of MV-algebras is the so-called product MV-algebra (see, e.g., [40,41,42,43,44,45]). This notion was proposed independently by two authors: Riečan [40] and Montagna [41]. A Shannon entropy theory for product MV-algebras was provided in [30,46,47]. We note that in the recently published paper [48], the results regarding the Shannon entropy of partitions in product MV-algebras were exploited to define the notions of Kullback-Leibler divergence and mutual information of partitions in product MV-algebras. The Kullback-Leibler divergence (often shortened to K-L divergence) was proposed in [49] as the distance between two probability distributions and it is currently one of the most basic quantities in information theory.
When addressing some special issues, instead of Shannon entropy it is preferable to use an approach based on the concept of logical entropy [50,51,52,53,54,55,56]. If $A = \{A_1, \dots, A_n\}$ is a measurable partition with probabilities $p_1, \dots, p_n$ of the corresponding elements, then the logical entropy of $A$ is defined by the formula $h_l(A) = \sum_{i=1}^{n} l(p_i)$, where $l : [0, 1] \to [0, 1]$ is the logical entropy function defined by:

$l(x) = x(1 - x).$ (2)
In [50], the author gives a history of the logical entropy formula $h_l(A) = 1 - \sum_{i=1}^{n} p_i^2$. Interestingly, Alan Turing, who worked during the Second World War at the Bletchley Park facility in England, used the formula $\sum_{i=1}^{n} p_i^2$ in his famous cryptanalysis work. This formula was independently used by Polish cryptanalysts in their work [57] on the Enigma. The relationship between the Shannon entropy and the logical entropy is examined in [50]. In addition, the notions of logical cross entropy and logical divergence were proposed in the cited paper. For some recent works related to the concept of logical entropy on algebraic structures based on fuzzy set theory, we refer the reader to, for example, [58,59,60,61,62,63,64,65].
The purpose of this article is to extend the study of logical entropy provided in [50] to the case of product MV-algebras. The remainder of the article is structured as follows. In Section 2 we present the basic concepts, terminology, and known results that are used in the article. The results of the paper are given in the succeeding three sections. In Section 3, we define the logical entropy of partitions in product MV-algebras and its conditional version and examine their properties. In the following section, the results of Section 3 are exploited to define the concept of logical mutual information for the studied situation. Using the notion of logical conditional mutual information, we present chain rules for logical mutual information in product MV-algebras. In Section 5, we define the logical cross entropy and the logical divergence of states defined on product MV-algebras and examine the properties of these quantities. The results are explained with several examples to illustrate the theory developed in the article. The final section contains a brief overview. It is shown that by replacing the Shannon entropy function (Equation (1)) with the logical entropy function (Equation (2)) we obtain results analogous to those given in [48].

2. Preliminaries

The aim of the section is to provide basic concepts, terminology and the known results used in the paper.
Definition 1
[25]. An MV-algebra is an algebraic structure $M = (M, \oplus, \ast, \neg, 0, 1)$, where $\oplus$ is a commutative and associative binary operation on $M$, $\ast$ is a binary operation on $M$, $\neg$ is a unary operation on $M$, and $0, 1 \in M$, such that $a \oplus 0 = a$; $a \oplus 1 = 1$; $\neg(\neg a) = a$; $\neg 0 = 1$; $a \oplus \neg a = 1$; $\neg(\neg a \oplus b) \oplus b = \neg(\neg b \oplus a) \oplus a$; and $a \ast b = \neg(\neg a \oplus \neg b)$.
Example 1.
Let $M$ be the unit real interval $[0, 1]$, $\neg a = 1 - a$, $a \oplus b = \min(a + b, 1)$, $a \ast b = \max(a + b - 1, 0)$. Then the system $(M, \oplus, \ast, \neg, 0, 1)$ is an MV-algebra.
Example 2.
Let $(L, +, \le)$ be a commutative lattice ordered group (shortly l-group), i.e., $(L, +)$ is a commutative group, $(L, \le)$ is a partially ordered set that is a lattice, and $a \le b$ implies $a + c \le b + c$. Let $u \in L$ be a strong unit of $L$ (i.e., for each $a \in L$ there exists a positive integer $n$ satisfying the condition $a \le nu$) such that $u > 0$, where $0$ is the neutral element of $(L, +)$. Put $[0, u] = \{a \in L; \ 0 \le a \le u\}$, $\neg a = u - a$, $a \oplus b = (a + b) \wedge u$, $a \ast b = (a + b - u) \vee 0$, $1 = u$. Then the system $M_0(L, u) = ([0, u], \oplus, \ast, \neg, 0, 1)$ is an MV-algebra. Evidently, if $a, b \in L$ are such that $a + b \le u$, then $a \oplus b = a + b$. Moreover, it can be seen that the condition $a \ast b = 0$ is equivalent to the condition $a + b \le u$.
By the following Mundici representation theorem, every MV-algebra M can be identified with the unit interval [0, u] of a unique (up to isomorphism) commutative lattice ordered group L with a strong unit u. We say that L is the l-group corresponding to M .
Theorem 1
[66]. Let $M$ be an MV-algebra. Then there exists a commutative lattice ordered group $L$ with a strong unit $u$ such that $M = M_0(L, u)$, and $(L, u)$ is unique up to isomorphism.
Definition 2
[47]. Let $M = (M, \oplus, \ast, \neg, 0, 1)$ be an MV-algebra. A partition in $M$ is an n-tuple $\alpha = (a_1, \dots, a_n)$ of elements of $M$ with the property $a_1 + \dots + a_n = u$, where $+$ is the addition in the l-group $L$ corresponding to $M$ and $u$ is a strong unit of $L$.
In this paper we shall deal with product MV-algebras. The definition of product MV-algebra (cf. [40,41]), like the previous definition of partition in an MV-algebra, is based on Mundici's theorem: the MV-algebra operation $\oplus$ in the following definition, and in what follows, is substituted by the group operation $+$ in the commutative lattice ordered group $L$ that corresponds to the considered MV-algebra $M$. Analogously, the element $u$ is a strong unit of $L$, and $\le$ is the partial-ordering relation in $L$.
Definition 3
[40]. A product MV-algebra is an algebraic structure $(M, \oplus, \ast, \cdot, \neg, 0, 1)$, where $(M, \oplus, \ast, \neg, 0, 1)$ is an MV-algebra and $\cdot$ is a commutative and associative binary operation on $M$ with the following properties:
(i)
for every $a \in M$, $u \cdot a = a$;
(ii)
if $a, b, c \in M$ such that $a + b \le u$, then $c \cdot a + c \cdot b \le u$, and $c \cdot (a + b) = c \cdot a + c \cdot b$.
For brevity, we will write $(M, \cdot)$ instead of $(M, \oplus, \ast, \cdot, \neg, 0, 1)$. Further, we consider a state defined on $(M, \cdot)$, which plays the role of a probability measure on $M$. We note that a relevant probability theory for product MV-algebras was developed in [44]; see also [27,45].
Definition 4
[44]. A state on a product MV-algebra $(M, \cdot)$ is a map $s : M \to [0, 1]$ with the properties:
(i)
$s(u) = 1$;
(ii)
if $a, b \in M$ such that $a + b \le u$, then $s(a + b) = s(a) + s(b)$.
Notice that the disjointness of the elements $a, b \in M$ is expressed in the previous definition by the condition $a + b \le u$ (or, equivalently, by $a \le u - b$). According to the Mundici theorem this condition can be formulated equivalently as $a + b \in M$ or also as $a \ast b = 0$. As is customary, we will write $\sum_{i=1}^{n} a_i$ instead of $a_1 + \dots + a_n$. Let $s : M \to [0, 1]$ be a state. Applying induction we get that for any elements $a_1, \dots, a_n \in M$ such that $\sum_{i=1}^{n} a_i \le u$, it holds that $s(\sum_{i=1}^{n} a_i) = \sum_{i=1}^{n} s(a_i)$.
In the system of all partitions of $(M, \cdot)$, we define the refinement partial order in the standard way (cf. [23]). If $\alpha = (a_1, \dots, a_n)$ and $\beta = (b_1, \dots, b_m)$ are two partitions of $(M, \cdot)$, then we write $\beta \succ \alpha$ (and we say that $\beta$ is a refinement of $\alpha$) if there exists a partition $\{I(1), I(2), \dots, I(n)\}$ of the set $\{1, 2, \dots, m\}$ such that $a_i = \sum_{j \in I(i)} b_j$, for $i = 1, \dots, n$. Further, we define $\alpha \vee \beta = (a_i \cdot b_j; \ i = 1, \dots, n, \ j = 1, 2, \dots, m)$. Since $\sum_{i=1}^{n} \sum_{j=1}^{m} a_i \cdot b_j = (\sum_{i=1}^{n} a_i) \cdot (\sum_{j=1}^{m} b_j) = u \cdot u = u$, the system $\alpha \vee \beta$ is a partition of $(M, \cdot)$. The partition $\alpha \vee \beta$ represents a combined experiment consisting of a realization of the considered experiments $\alpha$ and $\beta$. If $\alpha_1, \alpha_2, \dots, \alpha_n$ are partitions in a product MV-algebra $(M, \cdot)$, then we put $\vee_{i=1}^{n} \alpha_i = \alpha_1 \vee \alpha_2 \vee \dots \vee \alpha_n$.

3. Logical Entropy of Partitions in Product MV-Algebras

In this section we define the logical entropy and the logical conditional entropy of partitions in a product MV-algebra and derive their properties.
Definition 5.
Let $\alpha = (a_1, \dots, a_n)$ be a partition in a product MV-algebra $(M, \cdot)$, and let $s : M \to [0, 1]$ be a state. Then we define the logical entropy of $\alpha$ with respect to the state $s$ by the formula:

$h_s^l(\alpha) = \sum_{i=1}^{n} s(a_i)(1 - s(a_i)).$ (3)
Remark 1.
Evidently, the logical entropy $h_s^l(\alpha)$ is always nonnegative, and it attains its maximum value $1 - \frac{1}{n}$ for the state $s$ uniform over $\alpha = (a_1, \dots, a_n)$. Since $\sum_{i=1}^{n} s(a_i) = s(\sum_{i=1}^{n} a_i) = s(u) = 1$, Equation (3) can also be written in the following form:

$h_s^l(\alpha) = 1 - \sum_{i=1}^{n} (s(a_i))^2.$ (4)
Example 3.
Let $(M, \cdot)$ be a product MV-algebra and $s : M \to [0, 1]$ be a state. If we put $\varepsilon = (u)$, then $\varepsilon$ is a partition of $(M, \cdot)$ with the property $\alpha \succ \varepsilon$, for every partition $\alpha$ of $(M, \cdot)$. Its logical entropy is $h_s^l(\varepsilon) = 0$. Let $a \in M$ with $s(a) = p$, where $p \in (0, 1)$. It is obvious that the pair $\alpha = (a, u - a)$ is a partition of $(M, \cdot)$. Since $s(u - a) = 1 - p$, the logical entropy is $h_s^l(\alpha) = 2p(1 - p)$. If we put $p = \frac{1}{2}$, then we have $h_s^l(\alpha) = \frac{1}{2}$.
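The computations in Example 3 are easy to check numerically. The following sketch (the helper name `logical_entropy` is ours, not notation from the paper) evaluates Equation (4) for a tuple of state values in exact rational arithmetic:

```python
from fractions import Fraction

def logical_entropy(probs):
    """Logical entropy h_s^l(alpha) = 1 - sum_i s(a_i)^2 (Equation (4))."""
    assert sum(probs) == 1, "state values of a partition must sum to 1"
    return 1 - sum(p * p for p in probs)

# The one-element partition epsilon = (u) carries no information:
print(logical_entropy([Fraction(1)]))      # 0
# A two-element partition (a, u - a) with s(a) = p has entropy 2p(1 - p):
p = Fraction(1, 2)
print(logical_entropy([p, 1 - p]))         # 1/2
```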
In the proofs we shall use the following propositions.
Proposition 1.
Let $\alpha = (a_1, \dots, a_n)$ be a partition of $(M, \cdot)$. Then, for every $b \in M$, we have:

$s(b) = \sum_{i=1}^{n} s(a_i \cdot b).$
Proof. 
According to Definitions 2, 3, and 4 we obtain:

$s(b) = s(u \cdot b) = s((\sum_{i=1}^{n} a_i) \cdot b) = s(\sum_{i=1}^{n} a_i \cdot b) = \sum_{i=1}^{n} s(a_i \cdot b).$
 ☐
Proposition 2.
For arbitrary partitions $\alpha, \beta$ of $(M, \cdot)$, it holds that $\alpha \vee \beta \succ \alpha$.
Proof. 
Let us suppose that $\alpha = (a_1, \dots, a_n)$, $\beta = (b_1, \dots, b_m)$. Put $I(i) = \{(i, 1), \dots, (i, m)\}$, for $i = 1, \dots, n$. Since we have:

$a_i = a_i \cdot u = a_i \cdot (\sum_{j=1}^{m} b_j) = \sum_{j=1}^{m} a_i \cdot b_j = \sum_{(l, j) \in I(i)} a_l \cdot b_j,$

for $i = 1, \dots, n$, we conclude that $\alpha \vee \beta \succ \alpha$.  ☐
Proposition 3.
Let $\alpha, \beta$ be partitions of $(M, \cdot)$ such that $\beta \succ \alpha$. Then for an arbitrary partition $\gamma$, it holds that $\beta \vee \gamma \succ \alpha \vee \gamma$.
Proof. 
Let $\alpha = (a_1, \dots, a_n)$, $\beta = (b_1, \dots, b_m)$, $\gamma = (c_1, \dots, c_r)$, $\beta \succ \alpha$. Then there is a partition $\{I(1), I(2), \dots, I(n)\}$ of the set $\{1, 2, \dots, m\}$ such that $a_i = \sum_{j \in I(i)} b_j$, for $i = 1, \dots, n$. The partition $\alpha \vee \gamma = (a_i \cdot c_k; \ i = 1, \dots, n, \ k = 1, \dots, r)$ is indexed by $\{(i, k); \ i = 1, \dots, n, \ k = 1, \dots, r\}$; therefore, we put $I(i, k) = \{(j, k); \ j \in I(i)\}$, for $i = 1, \dots, n$, $k = 1, \dots, r$. We obtain:

$a_i \cdot c_k = (\sum_{j \in I(i)} b_j) \cdot c_k = \sum_{j \in I(i)} b_j \cdot c_k = \sum_{(j, l) \in I(i, k)} b_j \cdot c_l,$

for $i = 1, \dots, n$, $k = 1, \dots, r$. This implies that $\beta \vee \gamma \succ \alpha \vee \gamma$.  ☐
Definition 6.
If $\alpha = (a_1, \dots, a_n)$ and $\beta = (b_1, \dots, b_m)$ are partitions of $(M, \cdot)$, then the logical conditional entropy of $\alpha$ given $\beta$ is defined by:

$h_s^l(\alpha / \beta) = \sum_{i=1}^{n} \sum_{j=1}^{m} s(a_i \cdot b_j)(s(b_j) - s(a_i \cdot b_j)).$ (5)
Remark 2.
Since by Proposition 1, for $j = 1, \dots, m$, it holds that $\sum_{i=1}^{n} s(a_i \cdot b_j) = s(b_j)$, Equation (5) can be written in the following equivalent form:

$h_s^l(\alpha / \beta) = \sum_{j=1}^{m} (s(b_j))^2 - \sum_{i=1}^{n} \sum_{j=1}^{m} (s(a_i \cdot b_j))^2.$ (6)
Remark 3.
Since $s(a_i \cdot b_j) \le s(b_j)$, for $i = 1, \dots, n$, $j = 1, \dots, m$, the logical conditional entropy $h_s^l(\alpha / \beta)$ is always nonnegative. Let us consider the partition $\varepsilon = (u)$. It can easily be verified that $h_s^l(\alpha / \varepsilon) = h_s^l(\alpha)$.
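For intuition, Equation (6) is directly computable from a table of joint state values $s(a_i \cdot b_j)$. A minimal Python sketch (the function name and the row/column layout of `joint` are our assumptions, not notation from the paper):

```python
from fractions import Fraction

def conditional_logical_entropy(joint):
    """Equation (6): h_s^l(alpha/beta) = sum_j s(b_j)^2 - sum_{i,j} s(a_i . b_j)^2.

    joint[i][j] holds s(a_i . b_j); the marginals s(b_j) follow from Proposition 1.
    """
    m = len(joint[0])
    b_marginals = [sum(row[j] for row in joint) for j in range(m)]
    return (sum(q * q for q in b_marginals)
            - sum(p * p for row in joint for p in row))

# Conditioning on the trivial partition epsilon = (u) returns h_s^l(alpha), as in Remark 3:
print(conditional_logical_entropy([[Fraction(1, 2)], [Fraction(1, 2)]]))  # 1/2
```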
Theorem 2.
For arbitrary partitions $\alpha, \beta$ of $(M, \cdot)$, it holds that:

$h_s^l(\alpha \vee \beta) = h_s^l(\alpha) + h_s^l(\beta / \alpha).$ (7)
Proof. 
Let us suppose that $\alpha = (a_1, \dots, a_n)$, $\beta = (b_1, \dots, b_m)$. Then by Equations (4) and (6) we obtain:

$h_s^l(\alpha) + h_s^l(\beta / \alpha) = 1 - \sum_{i=1}^{n} (s(a_i))^2 + \sum_{i=1}^{n} (s(a_i))^2 - \sum_{i=1}^{n} \sum_{j=1}^{m} (s(a_i \cdot b_j))^2 = 1 - \sum_{i=1}^{n} \sum_{j=1}^{m} (s(a_i \cdot b_j))^2 = h_s^l(\alpha \vee \beta).$
 ☐
Remark 4.
Let $\alpha_1, \alpha_2, \dots, \alpha_n$ be partitions of $(M, \cdot)$. Using Equation (7), considering the partition $\vee_{i=1}^{n} \alpha_i = \alpha_1 \vee \alpha_2 \vee \dots \vee \alpha_n$ and applying induction, we get:

$h_s^l(\alpha_1 \vee \dots \vee \alpha_n) = h_s^l(\alpha_1) + \sum_{i=2}^{n} h_s^l(\alpha_i / \alpha_1 \vee \dots \vee \alpha_{i-1}).$ (8)
Theorem 3.
For arbitrary partitions $\alpha, \beta$ of $(M, \cdot)$, it holds that:
(i)
$h_s^l(\alpha / \beta) \le h_s^l(\alpha)$;
(ii)
$h_s^l(\alpha \vee \beta) \le h_s^l(\alpha) + h_s^l(\beta)$.
Proof. 
Let us suppose that $\alpha = (a_1, \dots, a_n)$, $\beta = (b_1, \dots, b_m)$.
(i)
Since by Proposition 1, for $i = 1, \dots, n$, it holds that $\sum_{j=1}^{m} s(a_i \cdot b_j) = s(a_i)$, we obtain:

$\sum_{j=1}^{m} s(a_i \cdot b_j)(s(b_j) - s(a_i \cdot b_j)) \le (\sum_{j=1}^{m} s(a_i \cdot b_j))(\sum_{j=1}^{m} (s(b_j) - s(a_i \cdot b_j))) = s(a_i)(\sum_{j=1}^{m} s(b_j) - \sum_{j=1}^{m} s(a_i \cdot b_j)) = s(a_i)(1 - s(a_i)).$

Therefore:

$h_s^l(\alpha / \beta) = \sum_{i=1}^{n} \sum_{j=1}^{m} s(a_i \cdot b_j)(s(b_j) - s(a_i \cdot b_j)) \le \sum_{i=1}^{n} s(a_i)(1 - s(a_i)) = h_s^l(\alpha).$
(ii)
Combining Equation (7) and the previous property we obtain the claim (ii). ☐
In the following example, we illustrate the results of Theorem 3.
Example 4.
Let us consider the measurable space $([0, 1], B)$, where $B$ is the $\sigma$-algebra of all Borel subsets of the unit interval $[0, 1]$. Put $M = \{I_A; \ A \in B\}$, where $I_A$ is the indicator function of the set $A$, and define, for every $I_A, I_B \in M$, the operation $\cdot$ by the equality $I_A \cdot I_B = I_{A \cap B}$. The system $(M, \cdot)$ is a product MV-algebra with the unit element $u = I_X$. Let us define a state $s : M \to [0, 1]$ by the equality $s(I_A) = \int_0^1 I_A(x)\,dx$, for any element $I_A$ of $M$. The pairs $\alpha = (I_{[0, \frac{1}{2}]}, I_{[\frac{1}{2}, 1]})$ and $\beta = (I_{[0, \frac{1}{3}]}, I_{[\frac{1}{3}, 1]})$ are partitions of $(M, \cdot)$ with the s-state values $\frac{1}{2}, \frac{1}{2}$ and $\frac{1}{3}, \frac{2}{3}$ of the corresponding elements, respectively. By Equation (4) we can easily calculate their logical entropies: $h_s^l(\alpha) = \frac{1}{2}$, $h_s^l(\beta) = \frac{4}{9}$. The partition $\alpha \vee \beta = (I_{[0, \frac{1}{3}]}, I_{[\frac{1}{3}, \frac{1}{2}]}, I_{[\frac{1}{2}, 1]}, 0)$ has the s-state values $\frac{1}{3}, \frac{1}{6}, \frac{1}{2}, 0$ of the corresponding elements, and the logical entropy:

$h_s^l(\alpha \vee \beta) = 1 - [(\tfrac{1}{3})^2 + (\tfrac{1}{6})^2 + (\tfrac{1}{2})^2] = \tfrac{11}{18}.$

Since $\frac{11}{18} \le \frac{1}{2} + \frac{4}{9}$, the inequality $h_s^l(\alpha \vee \beta) \le h_s^l(\alpha) + h_s^l(\beta)$ holds. The logical conditional entropy of $\alpha$ given $\beta$ is the number:

$h_s^l(\alpha / \beta) = \sum_{j=1}^{2} (s(b_j))^2 - \sum_{i=1}^{2} \sum_{j=1}^{2} (s(a_i \cdot b_j))^2 = (\tfrac{1}{3})^2 + (\tfrac{2}{3})^2 - [(\tfrac{1}{3})^2 + (\tfrac{1}{6})^2 + (\tfrac{1}{2})^2] = \tfrac{5}{9} - \tfrac{7}{18} = \tfrac{1}{6};$

analogously we get the logical conditional entropy $h_s^l(\beta / \alpha) = \frac{1}{2} - \frac{7}{18} = \frac{1}{9}$. It can be verified that:

$h_s^l(\alpha \vee \beta) = h_s^l(\beta) + h_s^l(\alpha / \beta) = h_s^l(\alpha) + h_s^l(\beta / \alpha).$
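The numbers in Example 4 can be replayed mechanically from the s-state values. A short check of Equations (4) and (7) in exact rational arithmetic (the helper name `h` is our own):

```python
from fractions import Fraction as F

def h(probs):
    """Logical entropy via Equation (4): 1 - sum of squared state values."""
    return 1 - sum(p * p for p in probs)

alpha = [F(1, 2), F(1, 2)]
beta = [F(1, 3), F(2, 3)]
joint = [F(1, 3), F(1, 6), F(1, 2), F(0)]   # s-state values of alpha v beta

h_joint = h(joint)
h_a_given_b = h_joint - h(beta)    # Equation (7), rearranged
h_b_given_a = h_joint - h(alpha)

assert h_joint == F(11, 18)
assert h_a_given_b == F(1, 6) and h_b_given_a == F(1, 9)
assert h_joint <= h(alpha) + h(beta)   # subadditivity, Theorem 3 (ii)
```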
Theorem 4.
For arbitrary partitions $\alpha, \beta, \gamma$ of $(M, \cdot)$, it holds that:

$h_s^l(\alpha \vee \beta / \gamma) = h_s^l(\alpha / \gamma) + h_s^l(\beta / \alpha \vee \gamma).$ (9)
Proof. 
Let us suppose that $\alpha = (a_1, \dots, a_p)$, $\beta = (b_1, \dots, b_q)$, $\gamma = (c_1, \dots, c_r)$. Using Equation (6) we can write:

$h_s^l(\alpha / \gamma) + h_s^l(\beta / \alpha \vee \gamma) = \sum_{k=1}^{r} (s(c_k))^2 - \sum_{i=1}^{p} \sum_{k=1}^{r} (s(a_i \cdot c_k))^2 + \sum_{i=1}^{p} \sum_{k=1}^{r} (s(a_i \cdot c_k))^2 - \sum_{j=1}^{q} \sum_{i=1}^{p} \sum_{k=1}^{r} (s(b_j \cdot a_i \cdot c_k))^2 = \sum_{k=1}^{r} (s(c_k))^2 - \sum_{i=1}^{p} \sum_{j=1}^{q} \sum_{k=1}^{r} (s(a_i \cdot b_j \cdot c_k))^2 = h_s^l(\alpha \vee \beta / \gamma).$
 ☐
Remark 5.
Let $\alpha_1, \alpha_2, \dots, \alpha_n, \gamma$ be partitions of $(M, \cdot)$. Using the principle of mathematical induction, we get the following generalization of Equation (9):

$h_s^l(\alpha_1 \vee \dots \vee \alpha_n / \gamma) = h_s^l(\alpha_1 / \gamma) + \sum_{i=2}^{n} h_s^l(\alpha_i / \alpha_1 \vee \dots \vee \alpha_{i-1} \vee \gamma).$ (10)

If we put $\gamma = \varepsilon$, as a special case of Equation (10) we get Equation (8).
Theorem 5.
For arbitrary partitions $\alpha, \beta, \gamma$ of $(M, \cdot)$, it holds that:
(i)
$\beta \succ \alpha$ implies $h_s^l(\beta) \ge h_s^l(\alpha)$;
(ii)
$h_s^l(\alpha \vee \beta) \ge \max[h_s^l(\alpha); \ h_s^l(\beta)]$;
(iii)
$\beta \succ \alpha$ implies $h_s^l(\beta / \gamma) \ge h_s^l(\alpha / \gamma)$.
Proof. 
(i)
Let $\alpha = (a_1, \dots, a_k)$, $\beta = (b_1, \dots, b_l)$. By the assumption that $\beta \succ \alpha$ there is a partition $\{I(1), I(2), \dots, I(k)\}$ of the set $\{1, 2, \dots, l\}$ with the property $a_i = \sum_{j \in I(i)} b_j$, for $i = 1, \dots, k$. Therefore:

$h_s^l(\alpha) = 1 - \sum_{i=1}^{k} (s(a_i))^2 = 1 - \sum_{i=1}^{k} (s(\sum_{j \in I(i)} b_j))^2 = 1 - \sum_{i=1}^{k} (\sum_{j \in I(i)} s(b_j))^2 \le 1 - \sum_{i=1}^{k} \sum_{j \in I(i)} (s(b_j))^2 = 1 - \sum_{j=1}^{l} (s(b_j))^2 = h_s^l(\beta).$

We used the inequality $(\sum_{j \in I(i)} s(b_j))^2 \ge \sum_{j \in I(i)} (s(b_j))^2$, which follows from the inequality $(x_1 + \dots + x_n)^2 \ge x_1^2 + \dots + x_n^2$ valid for all nonnegative real numbers $x_1, \dots, x_n$.
(ii)
According to Proposition 2 it holds that $\alpha \vee \beta \succ \alpha$ and $\alpha \vee \beta \succ \beta$; therefore, the property (ii) is a direct consequence of the property (i).
(iii)
Let $\beta \succ \alpha$. Then by Proposition 3 we have $\beta \vee \gamma \succ \alpha \vee \gamma$. Therefore, using Equation (7) and the property (i) we get:

$h_s^l(\beta / \gamma) = h_s^l(\beta \vee \gamma) - h_s^l(\gamma) \ge h_s^l(\alpha \vee \gamma) - h_s^l(\gamma) = h_s^l(\alpha / \gamma).$
 ☐
In the following theorem, we prove the concavity of the logical entropy $h_s^l(\alpha)$ as a function of $s$. By the symbol $S(M)$ we denote the family of all states defined on $M$. It is easy to verify that if $s_1, s_2 \in S(M)$, then, for every real number $\lambda \in [0, 1]$, it holds that $\lambda s_1 + (1 - \lambda) s_2 \in S(M)$.
Theorem 6.
Let $\alpha$ be a given partition of $(M, \cdot)$. Then, for every $s_1, s_2 \in S(M)$ and for every real number $\lambda \in [0, 1]$, it holds that:

$\lambda h_{s_1}^l(\alpha) + (1 - \lambda) h_{s_2}^l(\alpha) \le h_{\lambda s_1 + (1 - \lambda) s_2}^l(\alpha).$
Proof. 
Let $\alpha = (a_1, \dots, a_n)$. The function $\phi : \mathbb{R} \to \mathbb{R}$ defined by $\phi(x) = x^2$, for every $x \in \mathbb{R}$, is convex; therefore, for every real number $\lambda \in [0, 1]$ and $i = 1, 2, \dots, n$, we have:

$(\lambda s_1(a_i) + (1 - \lambda) s_2(a_i))^2 \le \lambda (s_1(a_i))^2 + (1 - \lambda)(s_2(a_i))^2.$

Hence, we obtain:

$\sum_{i=1}^{n} (\lambda s_1(a_i) + (1 - \lambda) s_2(a_i))^2 \le \lambda \sum_{i=1}^{n} (s_1(a_i))^2 + (1 - \lambda) \sum_{i=1}^{n} (s_2(a_i))^2,$

and, consequently:

$1 - \sum_{i=1}^{n} (\lambda s_1(a_i) + (1 - \lambda) s_2(a_i))^2 \ge 1 - \lambda \sum_{i=1}^{n} (s_1(a_i))^2 - (1 - \lambda) \sum_{i=1}^{n} (s_2(a_i))^2.$

Therefore, we can write:

$\lambda h_{s_1}^l(\alpha) + (1 - \lambda) h_{s_2}^l(\alpha) = \lambda [1 - \sum_{i=1}^{n} (s_1(a_i))^2] + (1 - \lambda)[1 - \sum_{i=1}^{n} (s_2(a_i))^2] = 1 - \lambda \sum_{i=1}^{n} (s_1(a_i))^2 - (1 - \lambda) \sum_{i=1}^{n} (s_2(a_i))^2 \le 1 - \sum_{i=1}^{n} (\lambda s_1(a_i) + (1 - \lambda) s_2(a_i))^2 = 1 - \sum_{i=1}^{n} ((\lambda s_1 + (1 - \lambda) s_2)(a_i))^2 = h_{\lambda s_1 + (1 - \lambda) s_2}^l(\alpha).$

This proves that the logical entropy $s \mapsto h_s^l(\alpha)$ is concave on the class $S(M)$.  ☐

4. Logical Mutual Information in Product MV-Algebras

In this section, the previous results are exploited to introduce the concept of logical mutual information of partitions in product MV-algebras and its conditional version and to derive their properties. In particular, using the concept of logical conditional mutual information we formulate chain rules for the examined situation.
Definition 7.
Let $(M, \cdot)$ be a product MV-algebra. The logical mutual information of partitions $\alpha$ and $\beta$ in $(M, \cdot)$ is defined by:

$I_s^l(\alpha, \beta) = h_s^l(\alpha) - h_s^l(\alpha / \beta).$ (11)
Remark 6.
The inequality $h_s^l(\alpha / \beta) \le h_s^l(\alpha)$ implies that the logical mutual information $I_s^l(\alpha, \beta)$ is always nonnegative. Since, by Equation (7), it holds that $h_s^l(\alpha / \beta) = h_s^l(\alpha \vee \beta) - h_s^l(\beta)$, we also have the following identity:

$I_s^l(\alpha, \beta) = h_s^l(\alpha) + h_s^l(\beta) - h_s^l(\alpha \vee \beta).$ (12)

It follows that $I_s^l(\alpha, \beta) = I_s^l(\beta, \alpha)$, and, due to the inequality $h_s^l(\alpha) \le h_s^l(\alpha \vee \beta)$ (Theorem 5, (ii)), we have $I_s^l(\alpha, \beta) \le \min[h_s^l(\alpha); \ h_s^l(\beta)]$.
Example 5.
Put $M = \{f; \ f : [0, 1] \to [0, 1] \text{ is Borel measurable}\}$, and define in the class $M$ the operation $\cdot$ as the natural product of fuzzy sets. It is easy to see that $(M, \cdot)$ is a product MV-algebra. Further, we define a state $s : M \to [0, 1]$ by the equality $s(f) = \int_0^1 f(x)\,dx$, for every $f \in M$. Let us consider the pairs $\alpha = (a_1, a_2)$, $\beta = (b_1, b_2)$, where $a_1(x) = x$, $a_2(x) = 1 - x$, $b_1(x) = x^2$, $b_2(x) = 1 - x^2$, $x \in [0, 1]$. It is obvious that $\alpha$ and $\beta$ are partitions of $M$. Elementary calculations show that they have the s-state values $\frac{1}{2}, \frac{1}{2}$ and $\frac{1}{3}, \frac{2}{3}$ of the corresponding elements, respectively, and the logical entropies $h_s^l(\alpha) = \frac{1}{2}$, $h_s^l(\beta) = \frac{4}{9}$. The partition $\alpha \vee \beta = (a_1 \cdot b_1, a_1 \cdot b_2, a_2 \cdot b_1, a_2 \cdot b_2)$ has the s-state values $\frac{1}{4}, \frac{1}{4}, \frac{1}{12}, \frac{5}{12}$ of the corresponding elements, and the logical entropy:

$h_s^l(\alpha \vee \beta) = 1 - [(\tfrac{1}{4})^2 + (\tfrac{1}{4})^2 + (\tfrac{1}{12})^2 + (\tfrac{5}{12})^2] = \tfrac{25}{36} \approx 0.694444.$

Simple calculations show that $h_s^l(\alpha / \beta) = \frac{5}{9} - \frac{11}{36} = \frac{1}{4}$. By Equation (11) we obtain the logical mutual information of the partitions $\alpha$ and $\beta$:

$I_s^l(\alpha, \beta) = \tfrac{1}{2} - \tfrac{1}{4} = \tfrac{1}{4}.$

One can verify that:

$I_s^l(\alpha, \beta) = h_s^l(\alpha) + h_s^l(\beta) - h_s^l(\alpha \vee \beta).$
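Example 5 can likewise be verified numerically. The sketch below (the helper name `h` is ours) checks Equations (11) and (12) against the computed s-state values:

```python
from fractions import Fraction as F

def h(probs):
    """Logical entropy via Equation (4)."""
    return 1 - sum(p * p for p in probs)

alpha = [F(1, 2), F(1, 2)]
beta = [F(1, 3), F(2, 3)]
# s-state values of alpha v beta from Example 5: a1.b1, a1.b2, a2.b1, a2.b2
joint = [F(1, 4), F(1, 4), F(1, 12), F(5, 12)]

mutual = h(alpha) + h(beta) - h(joint)             # Equation (12)
assert mutual == h(alpha) - (h(joint) - h(beta))   # agrees with Equation (11)
assert mutual == F(1, 4)
assert 0 <= mutual <= min(h(alpha), h(beta))       # bound from Remark 6
```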
Remark 7.
Recall that the product MV-algebra presented in the previous example represents an important class of fuzzy sets; it is called the full tribe of fuzzy sets (cf. [21,24,25]).
Theorem 7.
If partitions $\alpha$ and $\beta$ of $(M, \cdot)$ are statistically independent, i.e., $s(a \cdot b) = s(a) \cdot s(b)$, for every $a \in \alpha$, $b \in \beta$, then:

$I_s^l(\alpha, \beta) = h_s^l(\alpha) \cdot h_s^l(\beta).$
Proof. 
Let $\alpha = (a_1, \dots, a_k)$, $\beta = (b_1, \dots, b_l)$. Using Equations (12) and (4) we obtain:

$I_s^l(\alpha, \beta) = 1 - \sum_{i=1}^{k} (s(a_i))^2 + 1 - \sum_{j=1}^{l} (s(b_j))^2 - 1 + \sum_{i=1}^{k} \sum_{j=1}^{l} (s(a_i) s(b_j))^2 = [1 - \sum_{i=1}^{k} (s(a_i))^2] \cdot [1 - \sum_{j=1}^{l} (s(b_j))^2] = h_s^l(\alpha) \cdot h_s^l(\beta).$
 ☐
As is known, one of the most significant properties of Shannon entropy is additivity: if partitions $A$, $B$ are statistically independent, then $h_S(A \vee B) = h_S(A) + h_S(B)$. Here, $A \vee B = \{A \cap B; \ A \in A, B \in B\}$. In the case of logical entropy, the following property applies.
Theorem 8.
If partitions $\alpha$ and $\beta$ of $(M, \cdot)$ are statistically independent, then:

$1 - h_s^l(\alpha \vee \beta) = (1 - h_s^l(\alpha))(1 - h_s^l(\beta)).$
Proof. 
As a consequence of Theorem 7 and Equation (12), we obtain:

$(1 - h_s^l(\alpha))(1 - h_s^l(\beta)) = 1 - h_s^l(\alpha) - h_s^l(\beta) + h_s^l(\alpha) \cdot h_s^l(\beta) = 1 - h_s^l(\alpha) - h_s^l(\beta) + I_s^l(\alpha, \beta) = 1 - h_s^l(\alpha) - h_s^l(\beta) + h_s^l(\alpha) + h_s^l(\beta) - h_s^l(\alpha \vee \beta) = 1 - h_s^l(\alpha \vee \beta).$
 ☐
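Theorems 7 and 8 are straightforward to confirm on a concrete independent pair; in the sketch below (helper `h` is ours), the construction of `joint` encodes the independence assumption $s(a_i \cdot b_j) = s(a_i) \cdot s(b_j)$:

```python
from fractions import Fraction as F

def h(probs):
    """Logical entropy via Equation (4)."""
    return 1 - sum(p * p for p in probs)

alpha = [F(1, 2), F(1, 2)]
beta = [F(1, 3), F(2, 3)]
# Statistical independence: s(a_i . b_j) = s(a_i) * s(b_j)
joint = [a * b for a in alpha for b in beta]

assert 1 - h(joint) == (1 - h(alpha)) * (1 - h(beta))       # Theorem 8
assert h(alpha) + h(beta) - h(joint) == h(alpha) * h(beta)  # Theorem 7 via Eq. (12)
```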
In the following two theorems, using the concept of logical conditional mutual information, chain rules for logical mutual information in product MV-algebras are established.
Definition 8.
Let $\alpha, \beta, \gamma$ be partitions of $(M, \cdot)$. The logical conditional mutual information of $\alpha$ and $\beta$ assuming a realization of $\gamma$ is defined by:

$I_s^l(\alpha, \beta / \gamma) = h_s^l(\alpha / \gamma) - h_s^l(\alpha / \beta \vee \gamma).$ (13)
Remark 8.
It is easy to show that:

$I_s^l(\alpha, \beta / \gamma) = I_s^l(\beta, \alpha / \gamma).$
Theorem 9.
For arbitrary partitions $\alpha, \beta, \gamma$ of $(M, \cdot)$, it holds that:

$I_s^l(\alpha \vee \beta, \gamma) = I_s^l(\alpha, \gamma) + I_s^l(\beta, \gamma / \alpha).$
Proof. 
Elementary calculations show that:

$I_s^l(\alpha, \gamma) + I_s^l(\beta, \gamma / \alpha) = h_s^l(\gamma) - h_s^l(\gamma / \alpha) + h_s^l(\gamma / \alpha) - h_s^l(\gamma / \alpha \vee \beta) = h_s^l(\gamma) - h_s^l(\gamma / \alpha \vee \beta) = I_s^l(\alpha \vee \beta, \gamma).$
 ☐
Theorem 10.
Let $\alpha_1, \alpha_2, \dots, \alpha_n, \gamma$ be partitions of $(M, \cdot)$. Then:

$I_s^l(\vee_{i=1}^{n} \alpha_i, \gamma) = I_s^l(\alpha_1, \gamma) + \sum_{i=2}^{n} I_s^l(\alpha_i, \gamma / \alpha_1 \vee \dots \vee \alpha_{i-1}).$
Proof. 
It follows by applying Equations (11), (8), (10), and (13). ☐
Definition 9.
Let $\alpha, \beta, \gamma$ be partitions of $(M, \cdot)$. We say that $\alpha$ and $\gamma$ are conditionally independent given $\beta$ if $I_s^l(\alpha, \gamma / \beta) = h_s^l(\alpha / \beta) \cdot h_s^l(\gamma / \beta)$.
Theorem 11.
Let $\alpha, \beta, \gamma$ be partitions of $(M, \cdot)$. If $\alpha$ and $\gamma$ are conditionally independent given $\beta$, then:

$I_s^l(\alpha \vee \beta, \gamma) = I_s^l(\beta, \gamma) + h_s^l(\alpha / \beta) \cdot h_s^l(\gamma / \beta).$
Proof. 
Using Theorem 9 we get:

$I_s^l(\alpha \vee \beta, \gamma) = I_s^l(\beta, \gamma) + I_s^l(\alpha, \gamma / \beta) = I_s^l(\beta, \gamma) + h_s^l(\alpha / \beta) \cdot h_s^l(\gamma / \beta).$
 ☐

5. Logical Cross Entropy and Logical Divergence in Product MV-Algebras

In this section, we define the notions of logical cross entropy and logical divergence in product MV-algebras. The proposed notions are analogies of the concepts of logical cross entropy and logical divergence introduced by Ellerman in [50]. For illustration, we provide some numerical examples.
Definition 10.
Let $\alpha = (a_1, \dots, a_n)$ be a partition in a product MV-algebra $(M, \cdot)$, and let $s, t \in S(M)$. We define the logical cross entropy of the states $s, t$ with respect to $\alpha$ by the formula:

$h_\alpha^l(s \| t) = \sum_{i=1}^{n} s(a_i)(1 - t(a_i)).$
Remark 9.
Since $\sum_{i=1}^{n} s(a_i) = 1$, we can also write:

$h_\alpha^l(s \| t) = 1 - \sum_{i=1}^{n} s(a_i) t(a_i).$

Evidently, the logical cross entropy $h_\alpha^l(s \| t)$ is symmetric and always nonnegative. If the states $s, t$ are identical over $\alpha$ (i.e., $s(a_i) = t(a_i)$, for $i = 1, 2, \dots, n$), then $h_\alpha^l(s \| t) = h_s^l(\alpha)$.
Definition 11.
Let $\alpha = (a_1, \dots, a_n)$ be a partition in a product MV-algebra $(M, \cdot)$, and let $s, t \in S(M)$. We define the logical divergence of the states $s, t$ with respect to $\alpha$ by the formula:

$d_\alpha^l(s \| t) = \frac{1}{2} \sum_{i=1}^{n} (s(a_i) - t(a_i))^2.$
Remark 10.
It is evident that $d_\alpha^l(s \| t) = d_\alpha^l(t \| s)$, and $d_\alpha^l(s \| t) \ge 0$, with equality if and only if the states $s, t$ are identical over $\alpha$. As in the case of K-L divergence, the logical divergence is not a distance metric because it does not satisfy the triangle inequality (as shown in the example that follows). Notice that its square root (with or without the $\frac{1}{2}$ factor) is a natural distance metric.
Example 6.
Consider any product MV-algebra $(M, \cdot)$ and states $s_1, s_2, s_3$ defined on it. Let $a \in M$ with $s_1(a) = p_1$, $s_2(a) = p_2$, $s_3(a) = p_3$, where $p_1, p_2, p_3 \in (0, 1)$. Then $s_1(u - a) = 1 - p_1$, $s_2(u - a) = 1 - p_2$, and $s_3(u - a) = 1 - p_3$. Put $p_1 = \frac{1}{2}$, $p_2 = \frac{1}{3}$, $p_3 = \frac{1}{4}$, and consider the partition $\alpha = (a, u - a)$ of $(M, \cdot)$. Let us calculate:

$d_\alpha^l(s_1 \| s_2) = \tfrac{1}{2}(s_1(a) - s_2(a))^2 + \tfrac{1}{2}(s_1(u - a) - s_2(u - a))^2 = (p_1 - p_2)^2 = \tfrac{1}{36}.$

Analogously:

$d_\alpha^l(s_1 \| s_3) = (p_1 - p_3)^2 = \tfrac{1}{16}, \quad \text{and} \quad d_\alpha^l(s_2 \| s_3) = (p_2 - p_3)^2 = \tfrac{1}{144}.$

Evidently,

$d_\alpha^l(s_1 \| s_3) > d_\alpha^l(s_1 \| s_2) + d_\alpha^l(s_2 \| s_3).$

This means that the triangle inequality for the logical divergence in product MV-algebras does not hold in general.
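Both the failure of the triangle inequality in Example 6 and the metric behaviour of the square root mentioned in Remark 10 can be checked numerically (the function name `divergence` is our own):

```python
import math
from fractions import Fraction as F

def divergence(s, t):
    """Logical divergence: (1/2) sum_i (s(a_i) - t(a_i))^2."""
    return sum((p - q) ** 2 for p, q in zip(s, t)) / 2

p1, p2, p3 = F(1, 2), F(1, 3), F(1, 4)
s1, s2, s3 = [p1, 1 - p1], [p2, 1 - p2], [p3, 1 - p3]

d12, d13, d23 = divergence(s1, s2), divergence(s1, s3), divergence(s2, s3)
assert (d12, d13, d23) == (F(1, 36), F(1, 16), F(1, 144))
# The triangle inequality fails for d itself ...
assert d13 > d12 + d23
# ... but holds (up to float rounding) for its square root, as Remark 10 notes:
assert math.sqrt(d13) <= math.sqrt(d12) + math.sqrt(d23) + 1e-12
```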
Theorem 12.
Let $\alpha$ be a partition of a product MV-algebra $(M, \cdot)$. Then, for all states $s, t$ defined on $(M, \cdot)$, it holds that:

$d_\alpha^l(s \| t) = h_\alpha^l(s \| t) - \tfrac{1}{2}(h_s^l(\alpha) + h_t^l(\alpha)).$
Proof. 
Assume that $\alpha = (a_1, \dots, a_n)$. Let us calculate:

$h_\alpha^l(s \| t) - \tfrac{1}{2}(h_s^l(\alpha) + h_t^l(\alpha)) = 1 - \sum_{i=1}^{n} s(a_i) t(a_i) - \tfrac{1}{2}(1 - \sum_{i=1}^{n} (s(a_i))^2) - \tfrac{1}{2}(1 - \sum_{i=1}^{n} (t(a_i))^2) = \tfrac{1}{2} \sum_{i=1}^{n} (s(a_i) - t(a_i))^2 = d_\alpha^l(s \| t).$
 ☐
Remark 11.
As a simple consequence of the previous theorem and the logical information inequality $d_\alpha^l(s \| t) \ge 0$ (with equality if and only if the states $s, t$ are identical over $\alpha$), we get that $h_\alpha^l(s \| t) \ge \frac{1}{2}(h_s^l(\alpha) + h_t^l(\alpha))$, with equality if and only if the states $s, t$ are identical over $\alpha$.
Example 7.
Consider the product MV-algebra $(M, \cdot)$ from Example 4 and the real functions $F_1, F_2$ defined by $F_1(x) = x$, $F_2(x) = x^2$, for every $x \in [0, 1]$. We define on the product MV-algebra $(M, \cdot)$ two states $s_1, s_2$ by the formulas:

$s_1(I_A) = \int_0^1 I_A(x)\,dF_1(x) = \int_0^1 I_A(x)\,dx,$

$s_2(I_A) = \int_0^1 I_A(x)\,dF_2(x) = \int_0^1 I_A(x) \cdot 2x\,dx,$

for any element $I_A$ of $M$. The partition $\alpha = (I_{[0, \frac{1}{2}]}, I_{[\frac{1}{2}, 1]})$ has the $s_1$-state values $\frac{1}{2}, \frac{1}{2}$ and the $s_2$-state values $\frac{1}{4}, \frac{3}{4}$ of the corresponding elements. Elementary calculations show that $h_{s_1}^l(\alpha) = \frac{1}{2}$ and $h_{s_2}^l(\alpha) = \frac{3}{8}$. Further we get:

$h_\alpha^l(s_1 \| s_2) = \sum_{i=1}^{2} s_1(a_i)(1 - s_2(a_i)) = \tfrac{1}{2}, \quad \text{and} \quad d_\alpha^l(s_1 \| s_2) = \tfrac{1}{2} \sum_{i=1}^{2} (s_1(a_i) - s_2(a_i))^2 = \tfrac{1}{16}.$

It is now possible to verify that:

$d_\alpha^l(s_1 \| s_2) = h_\alpha^l(s_1 \| s_2) - \tfrac{1}{2}(h_{s_1}^l(\alpha) + h_{s_2}^l(\alpha)),$

and

$h_\alpha^l(s_1 \| s_2) \ge \tfrac{1}{2}(h_{s_1}^l(\alpha) + h_{s_2}^l(\alpha)).$
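Finally, the identity of Theorem 12 as instantiated in Example 7 can be confirmed in a few lines (all helper names below are our own):

```python
from fractions import Fraction as F

def h(probs):
    """Logical entropy via Equation (4)."""
    return 1 - sum(p * p for p in probs)

def cross_entropy(s, t):
    """Logical cross entropy: 1 - sum_i s(a_i) t(a_i)."""
    return 1 - sum(p * q for p, q in zip(s, t))

def divergence(s, t):
    """Logical divergence: (1/2) sum_i (s(a_i) - t(a_i))^2."""
    return sum((p - q) ** 2 for p, q in zip(s, t)) / 2

s1 = [F(1, 2), F(1, 2)]   # s1-state values over alpha
s2 = [F(1, 4), F(3, 4)]   # s2-state values over alpha

assert cross_entropy(s1, s2) == F(1, 2)
assert divergence(s1, s2) == F(1, 16)
# Theorem 12: divergence = cross entropy minus the mean of the two logical entropies
assert divergence(s1, s2) == cross_entropy(s1, s2) - (h(s1) + h(s2)) / 2
assert cross_entropy(s1, s2) >= (h(s1) + h(s2)) / 2   # Remark 11
```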

6. Conclusions

In [48], the authors introduced the concepts of mutual information and K-L divergence in product MV-algebras and derived the fundamental properties of these quantities. Naturally, the presented theory is based on the Shannon entropy function (Equation (1)). The aim of this paper was to construct a relevant theory on product MV-algebras for the case when the Shannon entropy function is replaced by the logical entropy function (Equation (2)). The main results of the paper are contained in Section 3, Section 4 and Section 5.
In Section 3, we proposed the concepts of logical entropy and logical conditional entropy of partitions in product MV-algebras and examined their properties. Among other results, the concavity of the logical entropy was proved. In Section 4, the notions of logical entropy and logical conditional entropy were exploited to define the logical mutual information for the examined case of product MV-algebras. We have shown basic properties of these quantities. Moreover, chain rules for logical entropy and logical mutual information for the studied case of product MV-algebras were derived. In the final section, the notions of logical cross entropy and logical divergence in product MV-algebras were proposed. To illustrate the developed theory, several numerical examples are included in the paper.
As already mentioned in Section 4 (see Example 5), an important case of product MV-algebras is the full tribe M of fuzzy sets. We note that Shannon-type entropy on the full tribe M of fuzzy sets was examined in [21] (see also [24,25]). In a natural way, all results of the theory developed in this paper, based on the logical entropy function (2), may also be applied to the case of a full tribe of fuzzy sets.

Acknowledgments

The authors thank all anonymous reviewers for their valuable comments and suggestions, which have significantly improved the quality and presentation of this paper. The authors are grateful to Constantine the Philosopher University in Nitra for covering the costs of publishing in open access.

Author Contributions

All authors contributed significantly to the theoretical work, as well as to the creation of illustrative examples. Dagmar Markechová wrote the paper. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Gray, R.M. Entropy and Information Theory; Springer: Berlin/Heidelberg, Germany, 2009.
3. Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–358.
4. Piasecki, K. Fuzzy partitions of sets. BUSEFAL 1986, 25, 52–60.
5. De Baets, B.; Mesiar, R. T-Partitions. Fuzzy Sets Syst. 1998, 97, 211–223.
6. Mesiar, R.; Reusch, B.; Thiele, H. Fuzzy equivalence relations and fuzzy partition. J. Mult. Valued Log. Soft Comput. 2006, 12, 167–181.
7. Jayaram, B.; Mesiar, R. I-fuzzy equivalence relations and I-fuzzy partitions. Inf. Sci. 2009, 179, 1278–1297.
8. Montes, S.; Couso, I.; Gil, P. Fuzzy delta-epsilon partitions. Inf. Sci. 2003, 152, 267–285.
9. Montes, S.; Couso, I.; Gil, P. One-to-one correspondence between ε-partitions, (1 − ε)-equivalences and ε-pseudometrics. Fuzzy Sets Syst. 2001, 124, 87–95.
10. Dumitrescu, D. Fuzzy partitions with the connectives T, S. Fuzzy Sets Syst. 1992, 47, 193–195.
11. Dumitrescu, D. Fuzzy measures and entropy of fuzzy partitions. J. Math. Anal. Appl. 1993, 176, 359–373.
12. Riečan, B. An entropy construction inspired by fuzzy sets. Soft Comput. 2003, 7, 486–488.
13. Markechová, D. The entropy of fuzzy dynamical systems and generators. Fuzzy Sets Syst. 1992, 48, 351–363.
14. Markechová, D. Entropy of complete fuzzy partitions. Math. Slovaca 1993, 43, 1–10.
15. Markechová, D. Entropy and mutual information of experiments in the fuzzy case. Neural Netw. World 2013, 23, 339–349.
16. Mesiar, R. The Bayes principle and the entropy on fuzzy probability spaces. Int. J. Gen. Syst. 1991, 20, 67–72.
17. Mesiar, R.; Rybárik, J. Entropy of Fuzzy Partitions—A General Model. Fuzzy Sets Syst. 1998, 99, 73–79.
18. Rahimi, M.; Riazi, A. On local entropy of fuzzy partitions. Fuzzy Sets Syst. 2014, 234, 97–108.
19. Srivastava, P.; Khare, M.; Srivastava, Y.K. m-Equivalence, entropy and F-dynamical systems. Fuzzy Sets Syst. 2001, 121, 275–283.
20. Khare, M. Fuzzy σ-algebras and conditional entropy. Fuzzy Sets Syst. 1999, 102, 287–292.
21. Markechová, D.; Riečan, B. Entropy of Fuzzy Partitions and Entropy of Fuzzy Dynamical Systems. Entropy 2016, 18, 19.
22. Chang, C.C. Algebraic analysis of many valued logics. Trans. Am. Math. Soc. 1958, 88, 467–490.
23. Mundici, D. Advanced Łukasiewicz Calculus and MV-Algebras; Springer: Dordrecht, The Netherlands, 2011.
24. Riečan, B.; Mundici, D. Probability on MV-algebras. In Handbook of Measure Theory; Pap, E., Ed.; Elsevier: Amsterdam, The Netherlands, 2002; pp. 869–910.
25. Riečan, B.; Neubrunn, T. Integral, Measure and Ordering; Springer: Dordrecht, The Netherlands, 1997.
26. Mundici, D. MV Algebras: A Short Tutorial. 2007. Available online: http://www.matematica.uns.edu.ar/IX CongresoMonteiro/Comunicaciones/Mundici_tutorial.pdf (accessed on 26 May 2007).
27. Kroupa, T. Conditional probability on MV-algebras. Fuzzy Sets Syst. 2005, 149, 369–381.
28. Dvurečenskij, A.; Pulmannová, S. New Trends in Quantum Structures; Springer: Dordrecht, The Netherlands, 2000.
29. Di Nola, A.; Dvurečenskij, A.; Hyčko, M.; Manara, C. Entropy on Effect Algebras with the Riesz Decomposition Property II: MV-Algebras. Kybernetika 2005, 41, 161–176.
30. Riečan, B. Kolmogorov–Sinaj entropy on MV-algebras. Int. J. Theor. Phys. 2005, 44, 1041–1052.
31. Kôpka, F.; Chovanec, F. D-posets. Math. Slovaca 1994, 44, 21–34.
32. Kôpka, F. Quasiproduct on Boolean D-posets. Int. J. Theor. Phys. 2008, 47, 26–35.
33. Frič, R. On D-posets of fuzzy sets. Math. Slovaca 2014, 64, 545–554.
34. Foulis, D.J.; Bennet, M.K. Effect algebras and unsharp quantum logics. Found. Phys. 1994, 24, 1331–1352.
35. Frič, R.; Papčo, M. Probability domains. Int. J. Theor. Phys. 2010, 49, 3092–3100.
36. Skřivánek, V.; Frič, R. Generalized random events. Int. J. Theor. Phys. 2015, 54, 4386–4396.
37. Di Nola, A.; Dvurečenskij, A.; Hyčko, M.; Manara, C. Entropy on Effect Algebras with the Riesz Decomposition Property I: Basic Properties. Kybernetika 2005, 41, 143–160.
38. Giski, Z.E.; Ebrahimi, M. Entropy of Countable Partitions on Effect Algebras with the Riesz Decomposition Property and Weak Sequential Effect Algebras. Cankaya Univ. J. Sci. Eng. 2015, 12, 20–39.
39. Ebrahimi, M.; Mosapour, B. The Concept of Entropy on D-posets. Cankaya Univ. J. Sci. Eng. 2013, 10, 137–151.
40. Riečan, B. On the product MV-algebras. Tatra Mt. Math. 1999, 16, 143–149.
41. Montagna, F. An algebraic approach to propositional fuzzy logic. J. Log. Lang. Inf. 2000, 9, 91–124.
42. Jakubík, J. On product MV algebras. Czech. Math. J. 2002, 52, 797–810.
43. Di Nola, A.; Dvurečenskij, A. Product MV-algebras. Mult. Valued Log. 2001, 6, 193–215.
44. Riečan, B. On the probability theory on product MV algebras. Soft Comput. 2000, 4, 49–57.
45. Vrábelová, M. A note on the conditional probability on product MV algebras. Soft Comput. 2000, 4, 58–61.
46. Petrovičová, J. On the entropy of partitions in product MV algebras. Soft Comput. 2000, 4, 41–44.
47. Petrovičová, J. On the entropy of dynamical systems in product MV algebras. Fuzzy Sets Syst. 2001, 121, 347–351.
48. Markechová, D.; Riečan, B. Kullback-Leibler Divergence and Mutual Information of Partitions in Product MV Algebras. Entropy 2017, 19, 267.
49. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
50. Ellerman, D. An Introduction to Logical Entropy and Its Relation to Shannon Entropy. Int. J. Semant. Comput. 2013, 7, 121–145.
51. Ellerman, D. Logical Information Theory: New Foundations for Information Theory. Log. J. IGPL 2017, 25, 806–835.
52. Tamir, B.; Cohen, E. Logical Entropy for Quantum States. arXiv 2014, arXiv:1412.0616v2.
53. Tamir, B.; Cohen, E. A Holevo-Type Bound for a Hilbert Schmidt Distance Measure. J. Quantum Inf. Sci. 2015, 5, 127–133.
54. Rao, C.R. Diversity and dissimilarity coefficients. A unified approach. Theor. Popul. Biol. 1982, 21, 24–43.
55. Good, I.J. Comment (on Patil and Taillie: Diversity as a Concept and its Measurement). J. Am. Stat. Assoc. 1982, 77, 561–563.
56. Patil, G.P.; Taillie, C. Diversity as a Concept and its Measurement. J. Am. Stat. Assoc. 1982, 77, 548–561.
57. Rejewski, M. How Polish Mathematicians Deciphered the Enigma. Ann. Hist. Comput. 1981, 3, 213–234.
58. Markechová, D.; Riečan, B. Logical Entropy of Fuzzy Dynamical Systems. Entropy 2016, 18, 157.
59. Markechová, D.; Riečan, B. Logical Entropy and Logical Mutual Information of Experiments in the Intuitionistic Fuzzy Case. Entropy 2017, 19, 429.
60. Ebrahimzadeh, A.; Giski, Z.E.; Markechová, D. Logical Entropy of Dynamical Systems—A General Model. Mathematics 2017, 5, 4.
61. Ebrahimzadeh, A. Logical entropy of quantum dynamical systems. Open Phys. 2016, 14, 1–5.
62. Ebrahimzadeh, A. Quantum conditional logical entropy of dynamical systems. Ital. J. Pure Appl. Math. 2016, 36, 879–886.
63. Ebrahimzadeh, A.; Jamalzadeh, J. Conditional logical entropy of fuzzy σ-algebras. J. Intell. Fuzzy Syst. 2017, 33, 1019–1026.
64. Giski, Z.E.; Ebrahimzadeh, A. An introduction of logical entropy on sequential effect algebra. Indag. Math. 2017, 28, 928–937.
65. Mohammadi, U. The Concept of Logical Entropy on D-posets. J. Algebraic Struct. Appl. 2016, 1, 53–61.
66. Mundici, D. Interpretation of AF C*-algebras in Łukasiewicz sentential calculus. J. Funct. Anal. 1986, 56, 889–894.
