Article

An Information-Spectrum Approach to the Capacity Region of the Interference Channel †

Lei Lin, Xiao Ma, Chulong Liang, Xiujie Huang and Baoming Bai
1 School of Mathematics and Systems Science, Guangdong Polytechnic Normal University, Guangzhou 510665, China
2 School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China
3 Department of Electronic Engineering, City University of Hong Kong, Hong Kong 999077, China
4 Department of Computer Science, Jinan University, Guangzhou 510632, China
5 State Key Laboratory of ISN, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 2012 IEEE International Symposium on Information Theory, Cambridge, MA, USA, 1–6 July 2012.
Entropy 2017, 19(6), 270; https://doi.org/10.3390/e19060270
Submission received: 8 April 2017 / Revised: 2 June 2017 / Accepted: 10 June 2017 / Published: 13 June 2017
(This article belongs to the Special Issue Multiuser Information Theory)

Abstract
In this paper, a general formula for the capacity region of a general interference channel with two pairs of users is derived, which reveals that the capacity region is the union of a family of rectangles. In this union, each rectangle is determined by a pair of spectral inf-mutual information rates. Although the formula is difficult to compute in general, it provides useful insights into interference channels. In particular, when the inputs are discrete, ergodic Markov processes and the channel is stationary memoryless, the formula can be evaluated by the BCJR (Bahl-Cocke-Jelinek-Raviv) algorithm. The formula also suggests that exploiting the structure of the interference processes yields tighter inner bounds than the simplest one (obtained by treating the interference as noise). This is verified numerically by calculating the mutual information rates for Gaussian interference channels with embedded convolutional codes. Moreover, we present a coding scheme to approach the theoretical achievable rate pairs. Numerical results show that decoding gains can be achieved by taking the structure of the interference into account.

1. Introduction

In wireless communications, since the electromagnetic spectrum is limited, frequency bands are often used simultaneously by several radio links that are not completely isolated [1]. When several pairs of senders and receivers share a common communication medium, the transmission of information from one sender to the corresponding receiver interferes with the communications between the other senders and their receivers. This communication model is called the interference channel (IC), which was first defined by Shannon in [2] and further developed by Ahlswede in [3]. A basic problem for the IC is to determine the rate pairs at which information can be reliably transmitted over the channel, that is, the capacity region. However, the problem of characterizing this region has been open for over 40 years. The capacity region is known only in some special cases, such as strong, very strong and deterministic interference channels [4,5,6,7]. Over the decades, researchers have provided various inner and outer bounds on the capacity region of the general IC. For instance, two outer bounds on the capacity region of the Gaussian interference channel (GIFC) were derived in [8]. The first bound unifies and improves the outer bounds of Sato [9] and Carleial [10]. The second bound follows directly from the outer bounds in [11,12]; it is deduced by considering a degraded GIFC and is even tighter than the first one for certain weak GIFCs. In 1981, Han and Kobayashi [5] proposed the best known inner bound (the so-called HK region), which has been simplified by Kramer and by Chong et al. in their independent works [13,14]. By introducing the idea of approximation, Etkin, Tse and Wang [15] showed that the HK region [5] is within one bit of the capacity region of the GIFC.
In [16,17], a new computational model for the two-user GIFC was proposed, in which one pair of users (called the primary users) is constrained to use a fixed encoder and the other pair of users (called the secondary users) is allowed to optimize its code. The accessible capacity of the secondary users is defined as the maximum rate at which the secondary users can communicate reliably without degrading the performance of the primary users. Usually, the accessible capacity is higher than the maximum rate obtained by treating the interference as noise, because the structure of the interference from the primary link is taken into account in the computation, in line with the spirit of [18,19]. However, in the computation of the accessible capacity [17], the primary link is allowed to have a non-negligible error probability, which keeps the accessible capacity below what the capacity region allows. As a result, the fixed-code constraint on the primary users is relaxed in this paper; namely, we compute pairs of transmission rates at which both links can be asymptotically error-free.
This paper is concerned with a more general IC, which is characterized by a sequence of transition probabilities (see Figure 1). By adopting the information spectrum approach [20,21], we present a general formula for the capacity region of the two-user general IC. From the formula, it can be seen that the capacity region is the union of a family of rectangles, in which each rectangle is determined by a spectral inf-mutual information rate pair. The information spectrum approach, which is based on the limit superior/inferior in probability of a sequence of random variables, has been proved to be powerful in characterizing the limit behavior of a general source/channel. For instance, in [20,22], Han and Verdú proved that the minimum compression rate for a general source equals its spectral sup-entropy rate and the maximum transmission rate for a general point-to-point channel equals its spectral inf-mutual information rate with an optimized input process. Also, the information spectrum approach can be used to derive the capacity region of a general multiple access channel [23]. For more applications of the information spectrum approach, see [21] and the references therein.
The structure of the paper is as follows. In Section 2, the definition of a general IC and the concept of the spectral inf-mutual information rate are introduced. Section 3.1 introduces the general formula for the capacity region proposed in [24], while Section 3.2 presents a trellis-based algorithm to compute the rate pairs for a stationary memoryless IC with discrete ergodic Markov sources. Section 3.3 presents the numerical results for a GIFC with binary phase-shift keying (BPSK) modulation. Section 4 provides the detection and decoding algorithms for channels with structured interference. Section 5 concludes this paper.
In this paper, a random variable is denoted by an upper-case letter, say $X$, while its realization and sample space are denoted by $x$ and $\mathcal{X}$, respectively. A sequence of random variables of length $n$ is denoted by $X^n$, while its realization is denoted by $\mathbf{x} \in \mathcal{X}^n$ or $x^n \in \mathcal{X}^n$. We use $P_X(x)$ to denote the probability mass function (pmf) of $X$ if it is discrete or the probability density function (pdf) of $X$ if it is continuous.

2. Basic Definitions and Problem Statement

2.1. General IC

As shown in Figure 1, a general interference channel $\mathbf{W}$ can be characterized by input alphabets $\mathcal{X}_1, \mathcal{X}_2$, output alphabets $\mathcal{Y}_1, \mathcal{Y}_2$ and a sequence $\mathbf{W} = \{W^n(\cdot,\cdot\,|\,\cdot,\cdot)\}_{n=1}^{\infty}$, in which $W^n: \mathcal{X}_1^n \times \mathcal{X}_2^n \to \mathcal{Y}_1^n \times \mathcal{Y}_2^n$ is a probability transition matrix. That is, for all $n$,
$$W^n(\mathbf{y}_1, \mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2) \ge 0, \qquad \sum_{\mathbf{y}_1 \in \mathcal{Y}_1^n} \sum_{\mathbf{y}_2 \in \mathcal{Y}_2^n} W^n(\mathbf{y}_1, \mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2) = 1.$$
The marginal distributions $W_1^n, W_2^n$ of $W^n$ are given by
$$W_1^n(\mathbf{y}_1 \mid \mathbf{x}_1, \mathbf{x}_2) = \sum_{\mathbf{y}_2 \in \mathcal{Y}_2^n} W^n(\mathbf{y}_1, \mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2),$$
$$W_2^n(\mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2) = \sum_{\mathbf{y}_1 \in \mathcal{Y}_1^n} W^n(\mathbf{y}_1, \mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2).$$
Definition 1. 
An $(n, M_n^{(1)}, M_n^{(2)}, \varepsilon_n^{(1)}, \varepsilon_n^{(2)})$ code for the interference channel $\mathbf{W}$ consists of the following essentials:
(a) 
message sets:
$\mathcal{M}_n^{(1)} = \{1, 2, \ldots, M_n^{(1)}\}$ for Sender 1, and $\mathcal{M}_n^{(2)} = \{1, 2, \ldots, M_n^{(2)}\}$ for Sender 2;
(b) 
sets of codewords:
$\{\mathbf{x}_1(1), \mathbf{x}_1(2), \ldots, \mathbf{x}_1(M_n^{(1)})\} \subseteq \mathcal{X}_1^n$ for Encoder 1, and $\{\mathbf{x}_2(1), \mathbf{x}_2(2), \ldots, \mathbf{x}_2(M_n^{(2)})\} \subseteq \mathcal{X}_2^n$ for Encoder 2.
For Sender 1 to transmit message $i$, Encoder 1 outputs the codeword $\mathbf{x}_1(i)$. Similarly, for Sender 2 to transmit message $j$, Encoder 2 outputs the codeword $\mathbf{x}_2(j)$.
(c) 
collections of decoding sets:
$\mathcal{B}_1 = \{B_{1i} \subseteq \mathcal{Y}_1^n\}_{i=1,\ldots,M_n^{(1)}}$ for Decoder 1, and $\mathcal{B}_2 = \{B_{2j} \subseteq \mathcal{Y}_2^n\}_{j=1,\ldots,M_n^{(2)}}$ for Decoder 2,
where $\mathcal{Y}_1^n = \bigcup_{i=1}^{M_n^{(1)}} B_{1i}$ with $B_{1i} \cap B_{1i'} = \emptyset$ for $i \neq i'$, and $\mathcal{Y}_2^n = \bigcup_{j=1}^{M_n^{(2)}} B_{2j}$ with $B_{2j} \cap B_{2j'} = \emptyset$ for $j \neq j'$. That is, $\mathcal{B}_1$ and $\mathcal{B}_2$ are disjoint partitions of $\mathcal{Y}_1^n$ and $\mathcal{Y}_2^n$ determined in advance, respectively. After receiving $\mathbf{y}_1$, Decoder 1 outputs $\hat{i}$ whenever $\mathbf{y}_1 \in B_{1\hat{i}}$. Similarly, after receiving $\mathbf{y}_2$, Decoder 2 outputs $\hat{j}$ whenever $\mathbf{y}_2 \in B_{2\hat{j}}$.
(d) 
probabilities of decoding errors:
$$\varepsilon_n^{(1)} = \frac{1}{M_n^{(1)} M_n^{(2)}} \sum_{i=1}^{M_n^{(1)}} \sum_{j=1}^{M_n^{(2)}} W_1^n\bigl(B_{1i}^c \,\big|\, \mathbf{x}_1(i), \mathbf{x}_2(j)\bigr), \qquad \varepsilon_n^{(2)} = \frac{1}{M_n^{(1)} M_n^{(2)}} \sum_{i=1}^{M_n^{(1)}} \sum_{j=1}^{M_n^{(2)}} W_2^n\bigl(B_{2j}^c \,\big|\, \mathbf{x}_1(i), \mathbf{x}_2(j)\bigr),$$
where $(\cdot)^c$ denotes the complement of a set. Here we have assumed that each message $i \in \mathcal{M}_n^{(1)}$ and $j \in \mathcal{M}_n^{(2)}$ is produced independently and uniformly.
Remark 1.
To minimize the error probabilities, it is optimal to define the decoding sets $B_{1\hat{i}}$ and $B_{2\hat{j}}$ according to maximum likelihood decoding [25]. Namely,
$$\hat{i} = \arg\max_i \Pr\{\mathbf{y}_1 \mid \mathbf{x}_1(i)\}$$
and
$$\hat{j} = \arg\max_j \Pr\{\mathbf{y}_2 \mid \mathbf{x}_2(j)\}$$
are selected as the estimates of the transmitted messages at the two receivers, respectively.
Definition 2.
A rate pair $(R_1, R_2)$ is achievable if there exists a sequence of $(n, M_n^{(1)}, M_n^{(2)}, \varepsilon_n^{(1)}, \varepsilon_n^{(2)})$ codes such that
$$\lim_{n\to\infty} \varepsilon_n^{(1)} = 0, \quad \lim_{n\to\infty} \varepsilon_n^{(2)} = 0, \quad \liminf_{n\to\infty} \frac{\log M_n^{(1)}}{n} \ge R_1, \quad \liminf_{n\to\infty} \frac{\log M_n^{(2)}}{n} \ge R_2.$$
Definition 3.
The set of all achievable rate pairs is called the capacity region of the interference channel $\mathbf{W}$ and is denoted by $\mathcal{C}(\mathbf{W})$.

2.2. Preliminaries of Information-Spectrum Approach

We introduce the following notions from [21].
Definition 4 (liminf in probability).
For a sequence of random variables $\{Z_n\}_{n=1}^{\infty}$,
$$\operatorname{p-}\liminf_{n\to\infty} Z_n = \sup\Bigl\{\beta \,\Big|\, \lim_{n\to\infty}\Pr\{Z_n < \beta\} = 0\Bigr\}.$$
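As a quick sanity check (an added remark, not from the original text): if $Z_n$ converges in probability to a constant $c$, then every $\beta < c$ satisfies $\lim_{n\to\infty}\Pr\{Z_n < \beta\} = 0$ while no $\beta > c$ does, so
$$Z_n \xrightarrow{P} c \quad\Longrightarrow\quad \operatorname{p-}\liminf_{n\to\infty} Z_n = c.$$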
Definition 5.
If two sequences of random variables $\mathbf{X}_1 = \{X_1^n\}_{n=1}^{\infty}$ and $\mathbf{X}_2 = \{X_2^n\}_{n=1}^{\infty}$ satisfy
$$P_{X_1^n X_2^n}(\mathbf{x}_1, \mathbf{x}_2) = P_{X_1^n}(\mathbf{x}_1)\, P_{X_2^n}(\mathbf{x}_2)$$
for all $\mathbf{x}_1 \in \mathcal{X}_1^n$, $\mathbf{x}_2 \in \mathcal{X}_2^n$ and $n$, they are called independent, denoted by $\mathbf{X}_1 \perp \mathbf{X}_2$.
Similar to [20], we give
Definition 6.
Let $\mathcal{S}_I = \{(\mathbf{X}_1, \mathbf{X}_2) \mid \mathbf{X}_1 \perp \mathbf{X}_2\}$. Given an $(\mathbf{X}_1, \mathbf{X}_2) \in \mathcal{S}_I$, for the interference channel $\mathbf{W}$, we define the spectral inf-mutual information rates by
$$\underline{I}(\mathbf{X}_1; \mathbf{Y}_1) \triangleq \operatorname{p-}\liminf_{n\to\infty} \frac{1}{n} \log \frac{P_{Y_1^n \mid X_1^n}(Y_1^n \mid X_1^n)}{P_{Y_1^n}(Y_1^n)}, \quad (9)$$
$$\underline{I}(\mathbf{X}_2; \mathbf{Y}_2) \triangleq \operatorname{p-}\liminf_{n\to\infty} \frac{1}{n} \log \frac{P_{Y_2^n \mid X_2^n}(Y_2^n \mid X_2^n)}{P_{Y_2^n}(Y_2^n)}, \quad (10)$$
where
$$P_{Y_1^n \mid X_1^n}(\mathbf{y}_1 \mid \mathbf{x}_1) = \sum_{\mathbf{x}_2, \mathbf{y}_2} P_{X_2^n}(\mathbf{x}_2)\, W^n(\mathbf{y}_1, \mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2),$$
$$P_{Y_2^n \mid X_2^n}(\mathbf{y}_2 \mid \mathbf{x}_2) = \sum_{\mathbf{x}_1, \mathbf{y}_1} P_{X_1^n}(\mathbf{x}_1)\, W^n(\mathbf{y}_1, \mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2).$$

3. The Capacity Region of General IC

In this section, we first present, without proof, the formula for the capacity region $\mathcal{C}(\mathbf{W})$ of the general IC derived in [24], and then present an algorithm to compute achievable rate pairs.

3.1. The Main Theorem

Theorem 1.
The capacity region $\mathcal{C}(\mathbf{W})$ of the interference channel $\mathbf{W}$ is given by
$$\mathcal{C}(\mathbf{W}) = \bigcup_{(\mathbf{X}_1, \mathbf{X}_2) \in \mathcal{S}_I} R_{\mathbf{W}}(\mathbf{X}_1, \mathbf{X}_2),$$
where $R_{\mathbf{W}}(\mathbf{X}_1, \mathbf{X}_2)$ is defined as the collection of all $(R_1, R_2)$ satisfying
$$0 \le R_1 \le \underline{I}(\mathbf{X}_1; \mathbf{Y}_1), \qquad 0 \le R_2 \le \underline{I}(\mathbf{X}_2; \mathbf{Y}_2).$$

3.2. The Algorithm to Compute Achievable Rate Pairs

Theorem 1 provides a general formula for the capacity region of a general IC. However, it is usually difficult to compute the spectral inf-mutual information rates given in (9) and (10). In order to get insights into the interference channels, we make the following assumptions:
(1)
the channel is stationary and memoryless, that is, the transition probability of the channel can be written as
$$W^n(\mathbf{y}_1, \mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2) = \prod_{i=1}^{n} W(y_{1,i}, y_{2,i} \mid x_{1,i}, x_{2,i});$$
(2)
sources are restricted to be stationary and ergodic discrete Markov processes.
With the above assumptions, the spectral inf-mutual information rates reduce to
$$\underline{I}(\mathbf{X}_1; \mathbf{Y}_1) = \lim_{n\to\infty} \frac{1}{n} I(X_1^n; Y_1^n),$$
$$\underline{I}(\mathbf{X}_2; \mathbf{Y}_2) = \lim_{n\to\infty} \frac{1}{n} I(X_2^n; Y_2^n),$$
which can be evaluated by the Monte Carlo method [26,27,28] using the BCJR algorithm [29] over a trellis. Actually, any stationary and ergodic discrete Markov source can be described by a time-invariant trellis; that is, a trellis section uniquely specifies the source. A trellis section is composed of left (or starting) states and right (or ending) states, which are connected by branches. For example, Source $\mathbf{x}_1$ can be specified by a trellis $\mathrm{T}_1$ as follows.
  • Both the left and right states are selected from the set $\mathcal{S}_1 = \{0, 1, \ldots, |\mathcal{S}_1| - 1\}$;
  • Each branch is represented by a three-tuple $b = (s_1^-(b), x_1(b), s_1^+(b))$, where $s_1^-(b)$ is the left state, $s_1^+(b)$ is the right state, and the symbol $x_1(b) \in \mathcal{X}_1$ is the associated label. We also assume that a branch $b$ is uniquely determined by $s_1^-(b)$ and $x_1(b)$;
  • At time $t = 0$, the source starts from state $s_{1,0} \in \mathcal{S}_1$. If at time $t-1$ ($t > 0$) the source is in state $s_{1,t-1} \in \mathcal{S}_1$, then at time $t$ the source generates a symbol $x_{1,t} \in \mathcal{X}_1$ according to the conditional probability $P(x_{1,t} \mid s_{1,t-1})$ and goes into a state $s_{1,t} \in \mathcal{S}_1$ such that $(s_{1,t-1}, x_{1,t}, s_{1,t})$ is a branch. Obviously, when the source runs from time $t = 0$ to $t = n$, a sequence $x_{1,1}, x_{1,2}, \ldots, x_{1,n}$ is generated. The Markov property says that
$$P(x_{1,t} \mid x_{1,1}, \ldots, x_{1,t-1}, s_{1,0}) = P(x_{1,t} \mid s_{1,t-1}),$$
so the probability of a given sequence $x_{1,1}, x_{1,2}, \ldots, x_{1,n}$ with the initial state $s_{1,0}$ can be factored as
$$P(x_{1,1}, x_{1,2}, \ldots, x_{1,n} \mid s_{1,0}) = \prod_{t=1}^{n} P(x_{1,t} \mid s_{1,t-1}).$$
Similarly, we can represent $\mathbf{x}_2$ by a trellis $\mathrm{T}_2$ with the state set $\mathcal{S}_2 = \{0, 1, \ldots, |\mathcal{S}_2| - 1\}$. Each branch is denoted by $b = (s_2^-(b), x_2(b), s_2^+(b))$, where $s_2^-(b)$ is the left state, $s_2^+(b)$ is the right state and the symbol $x_2(b) \in \mathcal{X}_2$ is the associated label. Assume that source $\mathbf{x}_2$ starts from the state $s_{2,0} \in \mathcal{S}_2$. If at time $t-1$ ($t > 0$) the source is in state $s_{2,t-1} \in \mathcal{S}_2$, then at time $t$ the source generates a symbol $x_{2,t} \in \mathcal{X}_2$ according to the conditional probability $P(x_{2,t} \mid s_{2,t-1})$ and goes into a state $s_{2,t} \in \mathcal{S}_2$ such that $(s_{2,t-1}, x_{2,t}, s_{2,t})$ is a branch. The probability of a given sequence $x_{2,1}, x_{2,2}, \ldots, x_{2,n}$ can be factored as
$$P(x_{2,1}, x_{2,2}, \ldots, x_{2,n} \mid s_{2,0}) = \prod_{t=1}^{n} P(x_{2,t} \mid s_{2,t-1}).$$
For simplicity, the initial states have been fixed as $s_{1,0} = 0$ and $s_{2,0} = 0$, and hence can be dropped from the equations.
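To make the trellis description concrete, the following minimal Python sketch (added for illustration; the two-state trellis and its branch probabilities are made-up examples, not taken from the paper) generates a sample sequence from a Markov source specified by a trellis section.

```python
import numpy as np

# Hypothetical two-state trellis section for a binary Markov source.
# Branches leaving each left state are stored as (symbol, right_state, P(symbol | left_state)).
branches = {
    0: [(+1, 0, 0.7), (-1, 1, 0.3)],
    1: [(+1, 0, 0.4), (-1, 1, 0.6)],
}

def sample_markov_source(n, s0=0, rng=np.random.default_rng(0)):
    """Generate x_{1,1}, ..., x_{1,n} by walking the trellis from the initial state s0."""
    state, symbols = s0, []
    for _ in range(n):
        outgoing = branches[state]
        probs = [p for (_, _, p) in outgoing]
        k = rng.choice(len(outgoing), p=probs)   # pick a branch according to P(x | state)
        symbol, next_state, _ = outgoing[k]
        symbols.append(symbol)
        state = next_state
    return np.array(symbols)

x1 = sample_markov_source(20)
print(x1)
```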
Next we focus on the evaluation of $\lim_{n\to\infty} \frac{1}{n} I(X_1^n; Y_1^n)$; the rate $\lim_{n\to\infty} \frac{1}{n} I(X_2^n; Y_2^n)$ can be estimated similarly. Specifically, we can express the limit as
$$\lim_{n\to\infty} \frac{1}{n} I(X_1^n; Y_1^n) = \lim_{n\to\infty} \frac{1}{n} H(Y_1^n) - \lim_{n\to\infty} \frac{1}{n} H(Y_1^n \mid X_1^n),$$
where $\lim_{n\to\infty} \frac{1}{n} H(Y_1^n)$ and $\lim_{n\to\infty} \frac{1}{n} H(Y_1^n \mid X_1^n)$ can be estimated by similar methods. (For continuous $\mathbf{y}_1$, the computations of $\lim_{n\to\infty} \frac{1}{n} h(Y_1^n)$ and $\lim_{n\to\infty} \frac{1}{n} h(Y_1^n \mid X_1^n)$ can be implemented by substituting the pdf for the pmf.) As an example, we show how to compute $\lim_{n\to\infty} \frac{1}{n} H(Y_1^n)$. According to the Shannon-McMillan-Breiman theorem [30], with probability 1,
$$\lim_{n\to\infty} -\frac{1}{n} \log P(y_1^n) = \lim_{n\to\infty} \frac{1}{n} H(Y_1^n),$$
where $y_1^n$ stands for $(y_{1,1}, y_{1,2}, \ldots, y_{1,n})$. Then evaluating $\lim_{n\to\infty} \frac{1}{n} H(Y_1^n)$ reduces to computing
$$-\frac{1}{n} \log P(y_1^n) = -\frac{1}{n} \log \prod_{t=1}^{n} P(y_{1,t} \mid y_1^{t-1}) = -\frac{1}{n} \sum_{t=1}^{n} \log P(y_{1,t} \mid y_1^{t-1})$$
for a sufficiently long typical sequence $y_1^n$. Here, the key is to compute the conditional probabilities $P(y_{1,t} \mid y_1^{t-1})$ for all $t$. Since both $\mathbf{y}_1$ and $\mathbf{y}_2$ are hidden Markov sequences, this can be done by performing the BCJR algorithm over the following product trellis.
  • The product trellis has the state set $\mathcal{S} = \mathcal{S}_1 \times \mathcal{S}_2$, where “$\times$” denotes the Cartesian product.
  • Each branch is represented by a four-tuple $b = (s^-(b), x_1(b), x_2(b), s^+(b))$, where $s^-(b) = (s_1^-(b), s_2^-(b))$ is the left state and $s^+(b) = (s_1^+(b), s_2^+(b))$ is the right state. The labels $x_1(b) \in \mathcal{X}_1$ and $x_2(b) \in \mathcal{X}_2$ are such that $(s_1^-(b), x_1(b), s_1^+(b))$ and $(s_2^-(b), x_2(b), s_2^+(b))$ are branches in $\mathrm{T}_1$ and $\mathrm{T}_2$, respectively.
  • At time $t = 0$, the sources start from state $s_0 = (s_{1,0}, s_{2,0}) \in \mathcal{S}$. If at time $t-1$ ($t > 0$) the sources are in state $s_{t-1} = (s_{1,t-1}, s_{2,t-1}) \in \mathcal{S}$, then at time $t$ the sources generate symbols $(x_{1,t} \in \mathcal{X}_1,\, x_{2,t} \in \mathcal{X}_2)$ according to the conditional probability $P(x_{1,t} \mid s_{1,t-1})\, P(x_{2,t} \mid s_{2,t-1})$ and go into a state $s_t = (s_{1,t}, s_{2,t}) \in \mathcal{S}$ such that $(s_{t-1}, x_{1,t}, x_{2,t}, s_t)$ is a branch.
The following description shows how to compute $P(y_{1,t} \mid y_1^{t-1})$ by performing the BCJR algorithm. Given the received sequence $\mathbf{y}_1$, we define
  • Branch metrics: To each branch $b_t = (s_{t-1}, x_{1,t}, x_{2,t}, s_t)$, we assign a metric
$$\begin{aligned} \rho(b_t) &= P(b_t \mid s_{t-1})\, P(y_{1,t} \mid x_{1,t}, x_{2,t}) & (14)\\ &= P(x_{1,t} \mid s_{1,t-1})\, P(x_{2,t} \mid s_{2,t-1})\, P(y_{1,t} \mid x_{1,t}, x_{2,t}). & (15) \end{aligned}$$
In the computation of $\lim_{n\to\infty} \frac{1}{n} H(Y_1^n \mid X_1^n)$, the metric is replaced by $P(b_t \mid s_{t-1}, x_{1,t})\, P(y_{1,t} \mid x_{1,t}, x_{2,t})$.
  • State transition probabilities: The transition probability from $s_{t-1}$ to $s_t$ is defined as
$$\begin{aligned} \gamma_t(s_{t-1}, s_t) &= P(s_t, y_{1,t} \mid s_{t-1}) & (16)\\ &= \sum_{b_t:\, s^-(b_t) = s_{t-1},\, s^+(b_t) = s_t} \rho(b_t). & (17) \end{aligned}$$
  • Forward recursion variables: We define the a posteriori probabilities
$$\alpha_t(s_t) = P(s_t \mid y_1^t), \quad t = 0, 1, \ldots, n. \quad (18)$$
Then
$$P(y_{1,t} \mid y_1^{t-1}) = \sum_{s_{t-1}, s_t} \alpha_{t-1}(s_{t-1})\, \gamma_t(s_{t-1}, s_t), \quad (19)$$
where the values of $\alpha_t(s_t)$ can be computed recursively by
$$\alpha_t(s_t) = \frac{\sum_{s_{t-1}} \alpha_{t-1}(s_{t-1})\, \gamma_t(s_{t-1}, s_t)}{\sum_{s_{t-1}, s_t} \alpha_{t-1}(s_{t-1})\, \gamma_t(s_{t-1}, s_t)}. \quad (20)$$
In summary, the algorithm to estimate the entropy rate $\lim_{n\to\infty} \frac{1}{n} H(Y_1^n)$ is described as follows.
Algorithm 1. 
1. Initializations: Choose a sufficiently large number $n$. Set the initial state of the trellis to $s_0 = 0$. The forward recursion variables are initialized as $\alpha_0(s) = 1$ if $s = 0$ and $\alpha_0(s) = 0$ otherwise.
2. Simulations for Sender 1: Generate a Markov sequence $\mathbf{x}_1 = (x_{1,1}, x_{1,2}, \ldots, x_{1,n})$ according to the trellis $\mathrm{T}_1$ of source $\mathbf{x}_1$.
3. Simulations for Sender 2: Generate a Markov sequence $\mathbf{x}_2 = (x_{2,1}, x_{2,2}, \ldots, x_{2,n})$ according to the trellis $\mathrm{T}_2$ of source $\mathbf{x}_2$.
4. Simulations for Receiver 1: Generate the received sequence $\mathbf{y}_1$ according to the transition probability $W^n(\mathbf{y}_1, \mathbf{y}_2 \mid \mathbf{x}_1, \mathbf{x}_2)$.
5. Computations:
(a) For $t = 1, 2, \ldots, n$, compute the values of $P(y_{1,t} \mid y_1^{t-1})$ and $\alpha_t(s_t)$ recursively according to Equations (19) and (20).
(b) Evaluate the entropy rate
$$\lim_{n\to\infty} \frac{1}{n} H(Y_1^n) = -\frac{1}{n} \sum_{t=1}^{n} \log P(y_{1,t} \mid y_1^{t-1}).$$
Similarly, we can evaluate the entropy rate $\lim_{n\to\infty} \frac{1}{n} H(Y_1^n \mid X_1^n)$, and therefore we obtain the achievable rate $\underline{I}(\mathbf{X}_1; \mathbf{Y}_1) = \lim_{n\to\infty} \frac{1}{n} I(X_1^n; Y_1^n)$.
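To illustrate how Algorithm 1 works, the following Python sketch (added here; it is not the authors' code) estimates $\underline{I}(\mathbf{X}_1;\mathbf{Y}_1)$ for the memoryless GIFC of Section 3.3 with both inputs taken as i.u.d. unit-power BPSK, so each source trellis has a single state; the forward recursion (19)-(20) is written for a general product trellis, and only the `branches` table would change for multi-state Markov sources. The values of $a$ and the sequence length are illustrative assumptions.

```python
import numpy as np

a, n = 0.5, 100_000
rng = np.random.default_rng(7)

# Product-trellis branches: (left_state, x1, x2, right_state, P(x1|s1)*P(x2|s2)).
states = [0]
branches = [(0, s1, s2, 0, 0.25) for s1 in (+1.0, -1.0) for s2 in (+1.0, -1.0)]

# Simulate the channel y1 = x1 + sqrt(a)*x2 + n1 (unit-power BPSK, unit noise variance).
x1 = rng.choice([+1.0, -1.0], size=n)
x2 = rng.choice([+1.0, -1.0], size=n)
y1 = x1 + np.sqrt(a) * x2 + rng.standard_normal(n)

def pdf(y, x1b, x2b):
    """Gaussian channel density p(y1 | x1, x2) with unit noise variance."""
    mean = x1b + np.sqrt(a) * x2b
    return np.exp(-0.5 * (y - mean) ** 2) / np.sqrt(2 * np.pi)

def entropy_rate(y, x1_known=None):
    """Estimate (1/n) H(Y1^n), or (1/n) H(Y1^n|X1^n), via the forward recursion (19)-(20)."""
    alpha = {s: (1.0 if s == 0 else 0.0) for s in states}
    log_sum = 0.0
    for t in range(len(y)):
        gamma = {}                                   # gamma_t(s_{t-1}, s_t), Eqs. (16)-(17)
        for (sp, x1b, x2b, sn, p_branch) in branches:
            if x1_known is not None:
                if x1b != x1_known[t]:
                    continue                         # keep only branches consistent with x1
                p_branch = p_branch / 0.5            # P(b|s,x1) = P(x2|s2) for i.u.d. x1
            rho = p_branch * pdf(y[t], x1b, x2b)     # branch metric, Eqs. (14)-(15)
            gamma[(sp, sn)] = gamma.get((sp, sn), 0.0) + rho
        p_t = sum(alpha[sp] * g for (sp, _), g in gamma.items())          # Eq. (19)
        new_alpha = {s: 0.0 for s in states}
        for (sp, sn), g in gamma.items():
            new_alpha[sn] += alpha[sp] * g
        alpha = {s: v / p_t for s, v in new_alpha.items()}                # Eq. (20)
        log_sum += np.log2(p_t)
    return -log_sum / len(y)

rate = entropy_rate(y1) - entropy_rate(y1, x1_known=x1)
print("estimated I(X1;Y1) ~", rate, "bits per channel use")
```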

3.3. Numerical Results

We consider the GIFC shown in Figure 2, where the channel inputs $\mathbf{x}_1(i)$ and $\mathbf{x}_2(j)$ are BPSK sequences with power constraints $P_1$ and $P_2$, respectively; the additive noises $\mathbf{n}_1$ and $\mathbf{n}_2$ are sequences of independent and identically distributed (i.i.d.) standard Gaussian random variables, which are assumed to be independent of the channel inputs $\mathbf{x}_1(i)$ and $\mathbf{x}_2(j)$; the constant $a$ represents the gain of the interference link; and the channel outputs $\mathbf{y}_1$ and $\mathbf{y}_2$ are
$$\mathbf{y}_1 = \mathbf{x}_1(i) + \sqrt{a}\,\mathbf{x}_2(j) + \mathbf{n}_1,$$
$$\mathbf{y}_2 = \mathbf{x}_2(j) + \sqrt{a}\,\mathbf{x}_1(i) + \mathbf{n}_2.$$
We assume that $\mathbf{x}_1$ and $\mathbf{x}_2$ are the outputs of two (possibly different) generalized trellis encoders driven by independent and uniformly distributed (i.u.d.) input sequences, as proposed in [16]. As examples, we consider two input processes. One is referred to as “UnBPSK”, standing for an i.u.d. BPSK sequence; the other is referred to as “CcBPSK”, standing for the output sequence of the convolutional encoder with the generator matrix $G(D) = [1 + D + D^2 \quad 1 + D^2]$ driven by an i.u.d. input sequence.
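For concreteness, here is a small added Python sketch of how a CcBPSK sequence can be produced: i.u.d. bits are passed through a standard rate-1/2 feedforward realization of the encoder $G(D) = [1 + D + D^2 \quad 1 + D^2]$ (two coded bits per input bit) and then mapped to BPSK under the power constraint. The bit-to-symbol mapping $c \mapsto \sqrt{P}(1-2c)$, the 7.0 dB power (as in Figure 4) and the sequence length are assumptions made for illustration.

```python
import numpy as np

def conv_encode(u):
    """Rate-1/2 feedforward convolutional encoder with generators (1+D+D^2, 1+D^2)."""
    d1 = d2 = 0                       # encoder memory: D and D^2 taps
    out = []
    for bit in u:
        c0 = bit ^ d1 ^ d2            # generator 1 + D + D^2
        c1 = bit ^ d2                 # generator 1 + D^2
        out.extend([c0, c1])
        d2, d1 = d1, bit              # shift the register
    return np.array(out)

rng = np.random.default_rng(0)
P1 = 10 ** (7.0 / 10)                 # 7.0 dB power, as in Figure 4
u = rng.integers(0, 2, size=1000)     # i.u.d. driving bits
c = conv_encode(u)                    # coded bits, length 2 * len(u)
x1 = np.sqrt(P1) * (1 - 2 * c)        # CcBPSK symbols with power P1
print(x1[:8])
```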
When Sender 1 uses CcBPSK and Sender 2 uses UnBPSK, the trellis representation of the scheme is shown in Figure 3. The numerical results are presented in Figure 4. There are three rectangles, OECH, ODBG and OFAI, each of which is determined by a pair of spectral inf-mutual information rates. Specifically, the rectangle OECH corresponds to the case when both senders use UnBPSK as inputs; the rectangle ODBG corresponds to the case when Sender 1 uses UnBPSK and Sender 2 uses CcBPSK; and the rectangle OFAI corresponds to the case when Sender 1 uses CcBPSK and Sender 2 uses UnBPSK. The point “A” can be achieved by a coding scheme in which Sender 1 uses a binary linear (coset) code concatenated with the convolutional code and Sender 2 uses a binary linear code; the point “B” can be achieved similarly. By a time-sharing scheme, the points on the line “AB” can be achieved. The point “C” marks the limit when the two senders use binary linear codes but treat the interference as an i.u.d. additive (BPSK) noise. It is obvious that the area of the pentagonal region ODBAI is greater than that of the rectangle OECH, which suggests that the bandwidth efficiency can potentially be improved by exploiting the structure of the interference.

4. Decoding Algorithms for Channels with Structured Interference

The purpose of this section is two-fold. The first is to show the decoding gain achieved by taking into account the structure of the interference. The second is to present a coding scheme to approach the point “B” in Figure 4.

4.1. A Coding Scheme

We design a coding scheme using Kite codes. (The main reason that we choose Kite codes is that it is convenient to set up the code rates. Actually, for a given data length, the code rates of Kite codes can vary “continuously” from 0.1 to 0.9 with satisfactory performance, as shown in [31,32].) Kite codes are a class of low-density parity-check (LDPC) codes, which can be decoded using the sum-product algorithm (SPA) [33,34]. As shown in Figure 5, Sender 1 uses a Kite code (with a parity-check matrix $H_1$) and Sender 2 uses a Kite code (with a parity-check matrix $H_2$) concatenated with the convolutional code with the generator matrix
$$G(D) = [1 + D + D^2 \quad 1 + D^2].$$
Encoding: For Sender 1, a binary sequence $\mathbf{u}_1 = (u_{1,1}, u_{1,2}, \ldots, u_{1,L_1})$ of length $L_1$ is encoded by a Kite code into a coded sequence $\mathbf{c}_1 = (c_{1,1}, c_{1,2}, \ldots, c_{1,N})$ of length $N$. For Sender 2, a binary sequence $\mathbf{u}_2 = (u_{2,1}, u_{2,2}, \ldots, u_{2,L_2})$ of length $L_2$ is first encoded by a Kite code into a sequence $\mathbf{v}_2 = (v_{2,1}, v_{2,2}, \ldots, v_{2,N})$ of length $N$, and then the sequence $\mathbf{v}_2$ is encoded by the convolutional code with the generator matrix $G(D)$ into a coded sequence $\mathbf{c}_2 = (c_{2,1}, c_{2,2}, \ldots, c_{2,N})$ of length $N$.
Modulation: The codewords $\mathbf{c}_k$ are mapped into the bipolar sequences $\mathbf{x}_k = (x_{k,1}, x_{k,2}, \ldots, x_{k,N})$ with $x_{k,i} = \sqrt{P_k}\,(1 - 2c_{k,i})$ ($k = 1, 2$), where $P_k$ is the power. Then we transmit $\mathbf{x}_k$ for $k = 1, 2$ over the interference channel.
Decoding: After receiving y 1 , Receiver 1 tries to recover the transmitted message u 1 . Similarly, after receiving y 2 , Receiver 2 tries to recover the transmitted message u 2 . We will consider several decoding algorithms in the next subsection to recover the transmitted messages.

4.2. Decoding Algorithms

In this subsection, depending on the knowledge about the interference, we design four decoding schemes, including “knowing only the power of the interference”, “knowing the signaling of the interference”, “knowing the CC” and “knowing the whole structure”. We focus on the decoding of Receiver 1, while the decoding of Receiver 2 can be implemented similarly (There is no decoding scheme “Knowing the CC” for User 2 because User 1 has no convolutional structure). All these decoding algorithms will be described as message processing/passing algorithms over normal graphs [35].

4.2.1. Message Processing/Passing Algorithms over Normal Graphs

Figure 6 shows a normal graph consisting of edges and vertices, which represent variables and subsystem constraints, respectively. Let $Z = \{Z_1, Z_2, \ldots, Z_n\}$ be $n$ distinct random variables that form a subsystem $S^{(0)}$. A normal subgraph with edges representing $Z$ and a vertex $S^{(0)}$ representing the constraints can be used to depict the subsystem. Each half-edge (ending with a dongle) may potentially be coupled to some half-edge in another subsystem. For instance, $Z_1$ and $Z_m$ are displayed as connected to subsystems $S^{(1)}$ and $S^{(m)}$, respectively. We call such an edge a full-edge. A message is associated with each edge, which is defined as the pmf/pdf of the corresponding variable. As in [36], we use the notation $P_{Z_i}^{(S^{(i)} \to S^{(0)})}(z)$ to denote the message from $S^{(i)}$ to $S^{(0)}$. In particular, we use the notation $P_{Z_i}^{(|\to S^{(0)})}(z)$ to represent the initial messages “driving” the subsystem $S^{(0)}$. For example, such initial messages can be the a priori probabilities from the source or the a posteriori probabilities computed from the channel observations. Assume that all messages to $S^{(0)}$ are available. The vertex $S^{(0)}$, as a message processor, delivers the outgoing message with respect to any given $Z_i$ by computing the likelihood function
$$P_{Z_i}^{(S^{(0)} \to S^{(i)})}(z) \propto \Pr\bigl\{S^{(0)} \text{ is satisfied} \,\big|\, Z_i = z\bigr\}, \quad z \in \mathcal{Z},$$
taking into account all the available messages and the system constraints. Since the computation of this likelihood function does not involve the incoming message $P_{Z_i}^{(S^{(i)} \to S^{(0)})}(z)$, we claim that $P_{Z_i}^{(S^{(0)} \to S^{(i)})}(z)$ is exactly the so-called extrinsic message.
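As a small added illustration of the extrinsic-message principle (a generic example, not one of the specific vertices in Figure 7), consider a vertex that enforces a single parity-check constraint over $\mathbb{F}_2$: the outgoing message on edge $i$ is computed from the incoming messages on all the other edges, so an edge's own information is never fed back to it.

```python
import numpy as np

# Extrinsic message at a single-parity-check vertex.
# incoming[j] = (p_j(0), p_j(1)) is the message arriving on edge j; the outgoing
# (extrinsic) message on edge i uses every incoming message except the one on edge i.
def extrinsic_parity_check(incoming, i):
    deltas = [p0 - p1 for j, (p0, p1) in enumerate(incoming) if j != i]
    d = np.prod(deltas)                # product of (p_j(0) - p_j(1)) over j != i
    return ((1 + d) / 2, (1 - d) / 2)  # (P(Z_i = 0), P(Z_i = 1)) under even parity

msgs = [(0.9, 0.1), (0.2, 0.8), (0.6, 0.4)]
print(extrinsic_parity_check(msgs, 0))
```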

4.2.2. Knowing Only the Power of the Interference

The decoding scheme for “knowing only the power of the interference” is the simplest one, and it can be described as a message processing/passing algorithm over the normal graph shown in Figure 7a. In this scheme, the interference from Sender 2 is treated as Gaussian with mean zero and variance $aP_2$, where $P_2$ is the power and $a$ is the square of the interference coefficient. That is, Receiver 1 assumes that $X_{2,j} \sim \mathcal{N}(0, aP_2)$ for $j = 1, 2, \ldots, N$. Since $N_{1,j} \sim \mathcal{N}(0, 1)$ for $j = 1, 2, \ldots, N$, the decoding algorithm is initialized with the messages
$$P_{C_{1,j}}^{(\Sigma_1 \to K_1)}(c) = \Pr\bigl\{C_{1,j} = c \,\big|\, \mathbf{y}_1,\, X_{2,j} \sim \mathcal{N}(0, aP_2)\bigr\} \propto \frac{1}{\sqrt{2\pi(1 + aP_2)}} \exp\left(-\frac{\bigl[y_{1,j} - \sqrt{P_1}(1 - 2c)\bigr]^2}{2(1 + aP_2)}\right), \quad c \in \mathbb{F}_2, \quad (24)$$
for $j = 1, 2, \ldots, N$. Then the decoding algorithm uses the SPA to iteratively compute the extrinsic messages $P_{U_{1,i}}^{(K_1 \to |)}$ and $P_{C_{1,j}}^{(K_1 \to \Sigma_1)}$. Once these are obtained, we make the following decisions:
$$\hat{u}_{1,i} = \begin{cases} 0, & \text{if } P_{U_{1,i}}^{(|\to K_1)}(0)\, P_{U_{1,i}}^{(K_1 \to |)}(0) > P_{U_{1,i}}^{(|\to K_1)}(1)\, P_{U_{1,i}}^{(K_1 \to |)}(1), \\ 1, & \text{otherwise}, \end{cases} \quad (25)$$
$$\hat{c}_{1,j} = \begin{cases} 0, & \text{if } P_{C_{1,j}}^{(\Sigma_1 \to K_1)}(0)\, P_{C_{1,j}}^{(K_1 \to \Sigma_1)}(0) > P_{C_{1,j}}^{(\Sigma_1 \to K_1)}(1)\, P_{C_{1,j}}^{(K_1 \to \Sigma_1)}(1), \\ 1, & \text{otherwise}, \end{cases} \quad (26)$$
for $i = 1, 2, \ldots, L_1$ and $j = 1, 2, \ldots, N$. The details of the decoding algorithm are given below.
Algorithm 2 
(“knowing only the power of the interference”).
  • Initialization:
    1. Initialize $P_{U_{1,i}}^{(|\to K_1)}(u) = \frac{1}{2}$ for $i = 1, 2, \ldots, L_1$ and $u \in \mathbb{F}_2$.
    2. Compute $P_{C_{1,j}}^{(\Sigma_1 \to K_1)}(c)$ for $j = 1, 2, \ldots, N$ and $c \in \mathbb{F}_2$ according to (24).
    3. Set a maximum iteration number $J$ and the iteration variable $j = 1$.
  • Repeat while $j \le J$:
    1. Compute the extrinsic messages $P_{U_{1,i}}^{(K_1 \to |)}$ and $P_{C_{1,j}}^{(K_1 \to \Sigma_1)}$ for $i = 1, 2, \ldots, L_1$ and $j = 1, 2, \ldots, N$ using the SPA.
    2. Make decisions according to (25) and (26). Denote $\hat{\mathbf{u}}_1 = (\hat{u}_{1,1}, \hat{u}_{1,2}, \ldots, \hat{u}_{1,L_1})$ and $\hat{\mathbf{c}}_1 = (\hat{c}_{1,1}, \hat{c}_{1,2}, \ldots, \hat{c}_{1,N})$.
    3. Compute the syndrome $S_1 = \hat{\mathbf{c}}_1 \cdot H_1^T$. If $S_1 = \mathbf{0}$, output $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{c}}_1$ and exit the iteration.
    4. Set $j = j + 1$. If $S_1 \neq \mathbf{0}$ and $j > J$, report a decoding failure.
  • End decoding.
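A minimal Python sketch of the initialization step (24) is given below (added illustration; the powers, interference coefficient and received samples are made-up values): the interference-plus-noise term is treated as $\mathcal{N}(0, 1 + aP_2)$, and each code bit receives a normalized two-valued message.

```python
import numpy as np

def init_messages_gaussian(y1, P1, P2, a):
    """Per-bit messages P(C_{1,j}=c | y_{1,j}) when the interference is treated as Gaussian."""
    var = 1.0 + a * P2                                   # noise plus Gaussian-modelled interference
    like = np.stack([np.exp(-(y1 - np.sqrt(P1) * (1 - 2 * c)) ** 2 / (2 * var))
                     for c in (0, 1)], axis=1)           # unnormalized likelihoods, c in {0, 1}
    return like / like.sum(axis=1, keepdims=True)        # normalize over c

y1 = np.array([1.2, -0.3, 2.1, -1.7])                    # illustrative received samples
print(init_messages_gaussian(y1, P1=5.0, P2=5.0, a=0.5))
```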

4.2.3. Knowing the Signaling of the Interference

The decoding algorithm for this scheme is almost the same as Algorithm 2; see Figure 7a. The difference is that $X_{2,j} \sim \mathcal{B}(1/2)$ (the Bernoulli-1/2 distribution; strictly speaking, $X_{2,j}$ is a shifted/scaled version of $\mathcal{B}(1/2)$) for $j = 1, 2, \ldots, N$. So the computation of $P_{C_{1,j}}^{(\Sigma_1 \to K_1)}(c)$ changes to
$$P_{C_{1,j}}^{(\Sigma_1 \to K_1)}(c) = \Pr\bigl\{C_{1,j} = c \,\big|\, \mathbf{y}_1,\, X_{2,j} \sim \mathcal{B}(1/2)\bigr\} \propto \frac{1}{2\sqrt{2\pi}} \exp\left(-\frac{\bigl[y_{1,j} - \sqrt{P_1}(1 - 2c) - \sqrt{aP_2}\bigr]^2}{2}\right) + \frac{1}{2\sqrt{2\pi}} \exp\left(-\frac{\bigl[y_{1,j} - \sqrt{P_1}(1 - 2c) + \sqrt{aP_2}\bigr]^2}{2}\right), \quad c \in \mathbb{F}_2, \; j = 1, 2, \ldots, N. \quad (27)$$
Then the decoding algorithm of “knowing the signaling of the interference” is as follows.
Algorithm 3 
(“knowing the signaling of the interference”).
  • Initialization:
    1. Initialize $P_{U_{1,i}}^{(|\to K_1)}(u) = \frac{1}{2}$ for $i = 1, 2, \ldots, L_1$ and $u \in \mathbb{F}_2$.
    2. Compute $P_{C_{1,j}}^{(\Sigma_1 \to K_1)}(c)$ for $j = 1, 2, \ldots, N$ and $c \in \mathbb{F}_2$ according to (27).
    3. Set a maximum iteration number $J$ and the iteration variable $j = 1$.
  • Repeat while $j \le J$:
    1. Compute the extrinsic messages $P_{U_{1,i}}^{(K_1 \to |)}$ and $P_{C_{1,j}}^{(K_1 \to \Sigma_1)}$ for $i = 1, 2, \ldots, L_1$ and $j = 1, 2, \ldots, N$ using the SPA.
    2. Make decisions according to (25) and (26), respectively.
    3. Compute the syndrome $S_1 = \hat{\mathbf{c}}_1 \cdot H_1^T$. If $S_1 = \mathbf{0}$, output $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{c}}_1$ and exit the iteration.
    4. Set $j = j + 1$. If $S_1 \neq \mathbf{0}$ and $j > J$, report a decoding failure.
  • End decoding.
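For comparison with the Gaussian approximation above, here is a corresponding added sketch of the initialization (27), in which the BPSK interference is marginalized explicitly as an equiprobable two-component Gaussian mixture (again with illustrative parameter values).

```python
import numpy as np

def init_messages_bpsk(y1, P1, P2, a):
    """Per-bit messages P(C_{1,j}=c | y_{1,j}) when the BPSK interference is marginalized."""
    like = np.zeros((len(y1), 2))
    for c in (0, 1):
        s = np.sqrt(P1) * (1 - 2 * c)                    # desired BPSK symbol for bit value c
        like[:, c] = 0.5 * (np.exp(-(y1 - s - np.sqrt(a * P2)) ** 2 / 2)
                            + np.exp(-(y1 - s + np.sqrt(a * P2)) ** 2 / 2))
    return like / like.sum(axis=1, keepdims=True)        # normalize over c

y1 = np.array([1.2, -0.3, 2.1, -1.7])                    # illustrative received samples
print(init_messages_bpsk(y1, P1=5.0, P2=5.0, a=0.5))
```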

4.2.4. Knowing the CC

“Knowing the CC” means that Decoder 1 knows the structure of the convolutional code. This scheme can be described as a message processing/passing algorithm over the normal graph shown in Figure 7b. Actually, the vertex $T_1$ is a combination of three subsystems (the convolutional encoder, the modulation and the GIFC constraint), which can be specified by a trellis $\mathrm{T}$ with parallel branches [16]. Therefore, the BCJR algorithm can be used to compute the extrinsic messages $P_{C_{1,j}}^{(T_1 \to K_1)}(c)$ for $j = 1, 2, \ldots, N$ over the trellis $\mathrm{T}$. Since the structure of the Kite code of Sender 2 is unknown, the constraint of vertex $K_2$ is inactive. In this case, the pmf of the variable $V_{2,k}$ ($k = 1, 2, \ldots, N$) is assumed to be the Bernoulli-1/2 distribution. There are two strategies to implement the BCJR algorithm. One is called “BCJR-once”, in which the BCJR algorithm is performed only once. The other strategy is called “BCJR-repeat”, in which the BCJR algorithm is performed in every iteration. In this scheme, the decoding decisions on $C_{1,j}$ are modified into
$$\hat{c}_{1,j} = \begin{cases} 0, & \text{if } P_{C_{1,j}}^{(T_1 \to K_1)}(0)\, P_{C_{1,j}}^{(K_1 \to T_1)}(0) > P_{C_{1,j}}^{(T_1 \to K_1)}(1)\, P_{C_{1,j}}^{(K_1 \to T_1)}(1), \\ 1, & \text{otherwise}, \end{cases} \quad (28)$$
for $j = 1, 2, \ldots, N$. These two decoding procedures are described in Algorithms 4 and 5, respectively.
Algorithm 4 
(BCJR-once).
  • Initialization:
    1. Initialize the pmfs $P_{C_{1,j}}^{(K_1 \to T_1)}(c) = \frac{1}{2}$ and $P_{C_{2,j}}^{(|\to T_1)}(c) = \frac{1}{2}$ for $j = 1, 2, \ldots, N$, $c \in \mathbb{F}_2$, and $P_{V_{2,k}}^{(|\to T_1)}(v) = \frac{1}{2}$ for $k = 1, 2, \ldots, N$, $v \in \mathbb{F}_2$.
    2. Compute the extrinsic messages $P_{C_{1,j}}^{(T_1 \to K_1)}(c)$ for $j = 1, 2, \ldots, N$, $c \in \mathbb{F}_2$ using the BCJR algorithm over the parallel-branch trellis $\mathrm{T}$.
    3. Set a maximum iteration number $J$ and the iteration variable $j = 1$.
  • Repeat while $j \le J$:
    1. Compute the extrinsic messages $P_{U_{1,i}}^{(K_1 \to |)}$ and $P_{C_{1,j}}^{(K_1 \to T_1)}$ for $i = 1, 2, \ldots, L_1$ and $j = 1, 2, \ldots, N$ using the SPA.
    2. Make decisions according to (25) and (28).
    3. Compute the syndrome $S_1 = \hat{\mathbf{c}}_1 \cdot H_1^T$. If $S_1 = \mathbf{0}$, output $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{c}}_1$ and exit the iteration.
    4. Set $j = j + 1$. If $S_1 \neq \mathbf{0}$ and $j > J$, report a decoding failure.
  • End decoding.
Algorithm 5 
(BCJR-repeat).
  • Initialization:
    1. Initialize the pmfs $P_{C_{1,j}}^{(K_1 \to T_1)}(c) = \frac{1}{2}$ and $P_{C_{2,j}}^{(|\to T_1)}(c) = \frac{1}{2}$ for $j = 1, 2, \ldots, N$, $c \in \mathbb{F}_2$, and $P_{V_{2,k}}^{(|\to T_1)}(v) = \frac{1}{2}$ for $k = 1, 2, \ldots, N$, $v \in \mathbb{F}_2$.
    2. Set a maximum iteration number $J$ and the iteration variable $j = 1$.
  • Repeat while $j \le J$:
    1. Compute the extrinsic messages $P_{C_{1,j}}^{(T_1 \to K_1)}(c)$ for $j = 1, 2, \ldots, N$, $c \in \mathbb{F}_2$ using the BCJR algorithm over the parallel-branch trellis $\mathrm{T}$.
    2. Compute the extrinsic messages $P_{U_{1,i}}^{(K_1 \to |)}$ and $P_{C_{1,j}}^{(K_1 \to T_1)}$ for $i = 1, 2, \ldots, L_1$ and $j = 1, 2, \ldots, N$ using the SPA.
    3. Make decisions according to (25) and (28).
    4. Compute the syndrome $S_1 = \hat{\mathbf{c}}_1 \cdot H_1^T$. If $S_1 = \mathbf{0}$, output $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{c}}_1$ and exit the iteration.
    5. Set $j = j + 1$. If $S_1 \neq \mathbf{0}$ and $j > J$, report a decoding failure.
  • End decoding.

4.2.5. Knowing the Whole Structure

The scheme “knowing the whole structure” for Receiver 1 can also be described as a message processing/passing algorithm over the normal graph shown in Figure 7b. Since it knows the whole structure of the interference, Receiver 1 can decode iteratively, utilizing the code structures of both users. Using the BCJR algorithm, $P_{C_{1,j}}^{(T_1 \to K_1)}(c)$ and $P_{V_{2,k}}^{(T_1 \to K_2)}(v)$ are computed simultaneously over the parallel-branch trellis $\mathrm{T}$. The iterative decoding algorithm is presented in Algorithm 6.
Algorithm 6 
(“knowing the whole structure”).
  • Initialization:
    1. Initialize the pmfs $P_{C_{1,j}}^{(K_1 \to T_1)}(c) = \frac{1}{2}$ and $P_{C_{2,j}}^{(|\to T_1)}(c) = \frac{1}{2}$ for $j = 1, 2, \ldots, N$, $c \in \mathbb{F}_2$, and $P_{V_{2,k}}^{(K_2 \to T_1)}(v) = \frac{1}{2}$ for $k = 1, 2, \ldots, N$, $v \in \mathbb{F}_2$.
    2. Set a maximum iteration number $J$ and the iteration variable $j = 1$.
  • Repeat while $j \le J$:
    1. Compute the extrinsic messages $P_{C_{1,j}}^{(T_1 \to K_1)}(c)$ for $j = 1, 2, \ldots, N$, $c \in \mathbb{F}_2$ and $P_{V_{2,k}}^{(T_1 \to K_2)}(v)$ for $k = 1, 2, \ldots, N$, $v \in \mathbb{F}_2$ using the BCJR algorithm over the parallel-branch trellis $\mathrm{T}$.
    2. Compute the extrinsic messages $P_{U_{1,i}}^{(K_1 \to |)}$ and $P_{C_{1,j}}^{(K_1 \to T_1)}$ for $i = 1, 2, \ldots, L_1$ and $j = 1, 2, \ldots, N$ using the SPA.
    3. Compute the extrinsic messages $P_{V_{2,k}}^{(K_2 \to T_1)}(v)$ for $k = 1, 2, \ldots, N$, $v \in \mathbb{F}_2$ using the SPA.
    4. Make decisions according to (25) and (28).
    5. Compute the syndrome $S_1 = \hat{\mathbf{c}}_1 \cdot H_1^T$. If $S_1 = \mathbf{0}$, output $\hat{\mathbf{u}}_1$ and $\hat{\mathbf{c}}_1$ and exit the iteration.
    6. Set $j = j + 1$. If $S_1 \neq \mathbf{0}$ and $j > J$, report a decoding failure.
  • End decoding.

4.3. Numerical Results

In this subsection, simulation results of the decoding algorithms are shown and analyzed. The simulation parameters of Figure 8 and Figure 9 are presented in Table 1. In these two figures, we let the power constraints of the two senders be the same, that is, $P_1 = P_2 = P$. Here, “Gaussian” stands for the scheme “knowing only the power of the interference”, “BPSK” stands for the scheme “knowing the signaling of the interference”, “BCJR1” stands for the scheme “BCJR-once”, “CONV” stands for the scheme “BCJR-repeat” and “Know All Structure” stands for the scheme “knowing the whole structure”. Figure 8 shows the error performance of Receiver 1. From Figure 8, we can see that, for Receiver 1, the more details of the structure of the interference are known, the better the performance obtained, that is, the larger the decoding gains. Similarly, Figure 9 gives the error performance of Receiver 2, where the scheme “knowing the whole structure” again has the best performance and a decoding gain is achieved by taking into account the structure of the interference.
As stated before, another objective of this section is to find a code rate pair nearest to the point “B” in Figure 4 with a bit error rate (BER) of $10^{-4}$. So we run simulations with different code rate pairs. In the simulations, we adopt the scheme “knowing the whole structure” and gradually decrease the code rates from the point “B” with a step length of 0.01. The simulation parameters for the different code rate pairs are listed in Table 2, while the simulation results are presented in a 3D graph in Figure 10. From the figure, it is obvious that as the code rates of the two users decrease, the BER also decreases. Finally, we find that the “best” code rate pair is (0.71, 0.48) for User 1 and User 2. The theoretical value of the point “B” is about (0.878, 0.486). So the gap between the result obtained with our decoding scheme and the theoretical value is small.

5. Conclusions

This paper showed that the capacity region of the two-user general IC is the union of a family of rectangles, where each rectangle is determined by a pair of spectral inf-mutual information rates associated with two independent input processes. When the channel is stationary memoryless and the inputs are discrete ergodic Markov processes, this pair of rates can be calculated. We also conclude that taking into account the structure of the interference processes can improve on the simplest inner bound (obtained by treating the interference as noise). Finally, a concrete coding scheme to approach the theoretical achievable rate pairs was presented, which showed that decoding gains can be achieved by considering the structure of the interference.

Acknowledgments

This paper was presented in part at the 2012 IEEE International Symposium Information Theory. This work was partially supported by the China NSFs (Nos. 61601131 and 61401177), the Guangdong NSFs (Nos. 2016A030313727, 2016A030308008 and 2014A030310183) and the Foundation for Distinguished Young Talents in Higher Education of Guangdong (No. 2015KQNCX086). This work was also supported by the Science and Technology Project of Guangzhou (No. 201510010193).

Author Contributions

Xiao Ma conceived the generalized scheme and contributed to the writing of the manuscript; Lei Lin provided the proof and completed the paper writing; Chulong Liang performed the experiments and the numerical analysis; Xiujie Huang and Baoming Bai helped analyze the data and contributed to the writing of the manuscript. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Carleial, A.B. Interference channels. IEEE Trans. Inf. Theory 1978, 24, 60–70. [Google Scholar] [CrossRef]
  2. Shannon, C.E. Two-way communication channels. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; Neyman, J., Ed.; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 611–644. [Google Scholar]
  3. Ahlswede, R. The capacity region of a channel with two senders and two receivers. Ann. Probab. 1974, 2, 805–814. [Google Scholar] [CrossRef]
  4. Carleial, A.B. A case where interference does not reduce capacity. IEEE Trans. Inf. Theory 1975, 21, 569–570. [Google Scholar] [CrossRef]
  5. Han, T.S.; Kobayashi, K. A new achievable rate region for the interference channel. IEEE Trans. Inf. Theory 1981, 27, 49–60. [Google Scholar] [CrossRef]
  6. El Gamal, A.A.; Costa, M.H.M. The capacity region of a class of deterministic interference channels. IEEE Trans. Inf. Theory 1982, 28, 343–346. [Google Scholar] [CrossRef]
  7. Costa, M.H.M.; El Gamal, A.A. The capacity region of the discrete memoryless interference channel with strong interference. IEEE Trans. Inf. Theory 1987, 33, 710–711. [Google Scholar] [CrossRef]
  8. Kramer, G. Outer bounds on the capacity of Gaussian interference channels. IEEE Trans. Inf. Theory 2004, 50, 581–586. [Google Scholar] [CrossRef]
  9. Sato, H. Two-user communication channels. IEEE Trans. Inf. Theory 1977, 23, 295–304. [Google Scholar] [CrossRef]
  10. Carleial, A.B. Outer bounds on the capacity of interference channels. IEEE Trans. Inf. Theory 1983, 29, 602–606. [Google Scholar] [CrossRef]
  11. Sato, H. On degraded Gaussian two-user channels. IEEE Trans. Inf. Theory 1978, 24, 637–640. [Google Scholar] [CrossRef]
  12. Costa, M.H.M. On the Gaussian interference channel. IEEE Trans. Inf. Theory 1985, 31, 607–615. [Google Scholar] [CrossRef]
  13. Kramer, G. Review of rate regions for interference channels. In Proceedings of the 2006 IEEE International Zurich Seminar on Communications (IZS), Zurich, Switzerland, 22–24 February 2006; pp. 162–165. [Google Scholar]
  14. Chong, H.F.; Motani, M.; Garg, H.K.; Gamal, H.E. On the Han-Kobayashi region for the interference channel. IEEE Trans. Inf. Theory 2008, 54, 3188–3195. [Google Scholar] [CrossRef]
  15. Etkin, R.H.; Tse, D.N.C.; Wang, H. Gaussian interference channel capacity to within one bit. IEEE Trans. Inf. Theory 2008, 54, 5534–5562. [Google Scholar] [CrossRef]
  16. Huang, X.; Ma, X.; Lin, L.; Bai, B. Accessible Capacity of Secondary Users over the Gaussian Interference Channel. In Proceedings of the IEEE International Symposium on Information Theory, St. Petersburg, Russia, 31 July–5 August 2011; pp. 811–815. [Google Scholar]
  17. Huang, X.; Ma, X.; Lin, L.; Bai, B. Accessible Capacity of Secondary Users. IEEE Trans. Inf. Theory 2014, 60, 4722–4738. [Google Scholar] [CrossRef]
  18. Baccelli, F.; El Gamal, A.A.; Tse, D. Interference Networks with Point-to-Point Codes. IEEE Trans. Inf. Theory 2011, 57, 2582–2596. [Google Scholar] [CrossRef]
  19. Moshksar, K.; Ghasemi, A.; Khandani, A.K. An Alternative To Decoding Interference or Treating Interference As Gaussian Noise. In Proceedings of the IEEE International Symposium on Information Theory, St. Petersburg, Russia, 31 July–5 August 2011; pp. 1176–1180. [Google Scholar]
  20. Han, T.S.; Verdú, S. Approximation theory of output statistics. IEEE Trans. Inf. Theory 1993, 39, 752–772. [Google Scholar] [CrossRef]
  21. Han, T.S. Information-Spectrum Methods in Information Theory; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  22. Verdú, S.; Han, T.S. A general formula for channel capacity. IEEE Trans. Inf. Theory 1994, 40, 1147–1157. [Google Scholar] [CrossRef]
  23. Han, T.S. An information-spectrum approach to capacity theorems for the general multiple-access channel. IEEE Trans. Inf. Theory 1998, 44, 2773–2795. [Google Scholar]
  24. Lin, L.; Ma, X.; Huang, X.; Bai, B. An information spectrum approach to the capacity region of general interference channel. In Proceedings of the IEEE International Symposium on Information Theory, Cambridge, MA, USA, 1–6 July 2012; pp. 2261–2265. [Google Scholar]
  25. Etkin, R.H.; Merhav, N.; Ordentlich, E. Error Exponents of Optimum Decoding for the Interference Channel. IEEE Trans. Inf. Theory 2010, 56, 40–56. [Google Scholar] [CrossRef]
  26. Kavčić, A. On the capacity of Markov sources over noisy channels. In Proceedings of the IEEE 2001 Global Telecommunications Conference, San Antonio, TX, USA, 25–29 November 2001; Volume 5, pp. 2997–3001. [Google Scholar]
  27. Arnold, D.M.; Loeliger, H.A. On the information rate of binary-input channels with memory. In Proceedings of the IEEE International Conference on Communications, Helsinki, Finland, 11–14 June 2001; Volume 9, pp. 2692–2695. [Google Scholar]
  28. Pfister, H.D.; Soriaga, J.B.; Siegel, P.H. On the achievable information rates of finite state ISI channels. In Proceedings of the IEEE Global Telecommunications Conference, San Antonio, TX, USA, 25–29 November 2001; Volume 5, pp. 2992–2996. [Google Scholar]
  29. Bahl, L.R.; Cocke, J.; Jelinek, F.; Raviv, J. Optimal decoding of linear codes for minimizing symbol error rate. IEEE Trans. Inf. Theory 1974, 20, 284–287. [Google Scholar] [CrossRef]
  30. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 1991. [Google Scholar]
  31. Ma, X.; Zhao, S.; Zhang, K.; Bai, B. Kite codes over Groups. In Proceedings of the IEEE Information Theory Workshop, Paraty, Brazil, 16–20 October 2011; pp. 481–485. [Google Scholar]
  32. Zhang, K.; Ma, X.; Zhao, S.; Bai, B.; Zhang, X. A New Ensemble of Rate-Compatible LDPC Codes. In Proceedings of the IEEE International Symposium on Information Theory, Cambridge, MA, USA, 1–6 July 2012. [Google Scholar]
  33. Wiberg, N.; Loeliger, H.A.; Kötter, R. Codes and iterative decoding on general graphs. Eur. Trans. Commun. 1995, 6, 513–526. [Google Scholar]
  34. Kschischang, F.R.; Frey, B.J.; Loeliger, H.A. Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory 2001, 47, 498–519. [Google Scholar] [CrossRef]
  35. Forney, G.D., Jr. Codes on graphs: Normal realizations. IEEE Trans. Inf. Theory 2001, 47, 520–548. [Google Scholar] [CrossRef]
  36. Ma, X.; Zhang, K.; Chen, H.; Bai, B. Low Complexity X-EMS Algorithms for Nonbinary LDPC Codes. IEEE Trans. Commun. 2012, 60, 9–13. [Google Scholar] [CrossRef]
Figure 1. General interference channel W .
Figure 2. Symmetric Gaussian interference channel.
Figure 3. The trellis section of (CcBPSK, UnBPSK) with 32 branches. For each branch $b$, $s^-(b)$ and $s^+(b)$ are the left state and the right state, respectively, while the associated symbols $x_1(b)$ and $x_2(b)$ are the transmitted signals of Sender 1 and Sender 2, respectively.
Figure 4. The evaluated achievable rate pairs of a specific Gaussian interference channel (GIFC), where $P_1 = P_2 = 7.0$ dB and $a = 0.5$. The rectangle OECH with legend “(UnBPSK, UnBPSK)” corresponds to the case when both senders use UnBPSK as inputs; the rectangle ODBG with legend “(UnBPSK, CcBPSK)” corresponds to the case when Sender 1 uses UnBPSK and Sender 2 uses CcBPSK; and the rectangle OFAI with legend “(CcBPSK, UnBPSK)” corresponds to the case when Sender 1 uses CcBPSK and Sender 2 uses UnBPSK.
Figure 5. A coding scheme for the two-user GIFC.
Figure 6. A normal graph of a general (sub)system.
Figure 7. The normal graphs: (a) stands for the normal graph of “knowing only the power of the interference” and “knowing the signaling of the interference” for Decoder 1; (b) stands for the normal graph of “knowing the CC” and “knowing the whole structure” for Decoder 1.
Figure 8. The error performance of Receiver 1. “Gaussian” stands for the scheme “knowing only the power of the interference”, “BPSK” stands for the scheme “knowing the signaling of the interference”, “BCJR1” stands for the scheme “BCJR-once”, “CONV” stands for the scheme “BCJR-repeat” and “Know All Structure” stands for the scheme “knowing the whole structure”.
Figure 9. The error performance of Receiver 2. “Gaussian” stands for the scheme “knowing only the power of the interference”, “BPSK” stands for the scheme “knowing the signaling of the interference” and “Know All Structure” stands for the scheme “knowing the whole structure”.
Figure 10. Error performance of the two users with different code rate pairs $(R_1, R_2)$. The blue plane represents the BER level, the green surface stands for the error performance of Receiver 1 and the red surface stands for the error performance of Receiver 2.
Table 1. Parameters of the bit error rate (BER) performance simulations.
Parameters | Values
Square of interference coefficient $a$ | 0.5
Maximum iteration number $J$ | 200
Kite code of Sender 1 | $N = 10000$, $L_1 = 8782$
Kite code of Sender 2 | $N = 5000$, $L_2 = 4862$
Generator matrix $G(D)$ | $[1 + D + D^2 \quad 1 + D^2]$
Code rate pair $(R_1, R_2)$ | $(0.8782, 0.4862)$
Table 2. Parameters of the simulations for different code rate pairs.
Parameters | Values
Square of interference coefficient $a$ | 0.5
Maximum iteration number $J$ | 200
Code length $N$ of Kite code of Sender 1 | 10000
Code length $N$ of Kite code of Sender 2 | 5000
Generator matrix $G(D)$ | $[1 + D + D^2 \quad 1 + D^2]$
Step length | 100
Range of message length $L_1$ | 7100–8800
Range of message length $L_2$ | 4000–4900
