The secrecy capacity of the arbitrarily varying wiretap channel under list decoding

We consider a communication scenario in which the channel undergoes two different classes of attacks at the same time: a passive eavesdropper and an active jammer. This scenario is modelled by the concept of arbitrarily varying wiretap channels (AVWCs). In this paper, we derive a full characterization of the list secrecy capacity of the AVWC, showing that the list secrecy capacity equals the correlated random secrecy capacity if the list size L is greater than the order of symmetrizability of the AVC between the transmitter and the legitimate receiver; otherwise, it is zero. Our result indicates that for a sufficiently large list size L, list codes can overcome the drawbacks of correlated and uncorrelated codes and provide a stable secrecy capacity for AVWCs. Furthermore, we investigate the effect on the list size L of relaxing the reliability and secrecy constraints by allowing a non-vanishing error probability and information leakage. We find that we can construct a list code whose rate is close to the correlated random secrecy capacity using a finite list size L that depends only on the requested average error probability. Finally, we point out that our capacity characterization is an important step towards investigating the analytical properties of the capacity function, such as its continuity behavior, Turing computability and the super-activation of parallel AVWCs.


Introduction
Communication systems nowadays require both reliable and secure information transmission. There is a common belief that establishing reliable communication is much easier than assuring the secrecy of the communication, especially over wireless channels. Over the last few decades, classical cryptographic techniques have been a useful tool to provide secrecy, in particular against eavesdroppers with limited computational power. However, with the rapid progress in the fields of digital design and number theory, the need for additional secrecy techniques has increased [15]. Information theoretic security is considered one of the most promising candidates to complement current cryptography-based systems [13,14]. This is because it depends only on the physical characteristics of the channel and not on the capabilities of the eavesdropper [5,18,24].
Information theoretic security was first studied by Wyner in [27], where he showed that secure communication can be established over a class of wiretap channels by exploiting the noise in the channel. Many researchers have extended Wyner's work by investigating different scenarios of the wiretap channel. However, most of these works considered a wiretap channel whose state is perfectly known to the transmitter. In real-life communication scenarios such as fading channels, it is very difficult to acquire perfect channel state information, as the channel usually varies over time. Additionally, some wiretap channels suffer from the presence of active jammers who are capable of maliciously manipulating the channel state in each channel use [21].
In order to capture the previous scenarios, the model of the arbitrarily varying channel (AVC) [3,1,11] and its corresponding wiretap channel (AVWC) [21,2] were considered. An AVWC models a channel under two classes of attacks at the same time: an active jammer who maliciously manipulates the channel state, and a passive eavesdropper [26] who listens to the transmitted signal. Our main task is to find a coding scheme that overcomes the different jamming strategies of the jammer and thereby establishes reliable communication over the AVC between the transmitter and the legitimate receiver. Simultaneously, this coding scheme should assure the secrecy of the transmission against eavesdropping for all channel states.
Two main coding schemes have been used to investigate reliable communication over AVCs in [3,1,11]: uncorrelated and correlated random codes. It was shown in [3] that the availability of correlated randomness between the transmitter and the receiver is crucial for the establishment of reliable communication over AVCs. In fact, AVCs exhibit a dichotomy [1]: their uncorrelated capacity either equals their correlated random capacity or it equals zero. This observation became clearer in [11], after it was shown that uncorrelated codes fail to establish reliable communication over symmetrizable AVCs, while correlated random codes can overcome the symmetrizability problem. Besides correlated and uncorrelated codes, list codes have been a very useful tool for reliable communication over AVCs. It was shown in [4,16] that if the list size L is greater than the order of symmetrizability of the AVC, then the list capacity is equal to the correlated random capacity; otherwise it is zero.
Similar to the non-secrecy case, uncorrelated and correlated random codes were the main coding schemes used to investigate secure communication over AVWCs [2,26,23]. It was shown therein that AVWCs exhibit the same dichotomy as AVCs: their uncorrelated secrecy capacity either equals their correlated random secrecy capacity or it vanishes. The results established in [26,23] indicate that constructing a coding scheme that provides reliable communication in the presence of an active jammer is more challenging than protecting this communication against passive eavesdropping. This is because jamming has a huge impact on the communication link, as it can induce a symmetrizable channel that makes reliable communication impossible. On the contrary, one can always confuse the eavesdropper by employing wiretap coding with the appropriate amount of randomization resources, chosen according to the worst channel realization to the eavesdropper. This observation motivates us to investigate the usage of list codes for secure communication over AVWCs. To the best of our knowledge, this investigation has not been considered in previous literature.
In this paper, we present a full characterization of the list secrecy capacity of an AVWC. We show that the list secrecy capacity is equal to the correlated random secrecy capacity if the list size L is greater than the order of symmetrizability of the AVC between the transmitter and the legitimate receiver; otherwise it is zero. This characterization indicates that list codes can provide a solution for the instability of the secrecy capacity of AVWCs under uncorrelated codes: if we choose the list size L to be greater than the order of symmetrizability of the legitimate AVC, we can always achieve secrecy rates up to the correlated random secrecy capacity. However, it was shown in [4] that even for simple AVCs, the order of symmetrizability can be arbitrarily large. This motivates us to investigate another problem: for an AVWC where we relax the reliability and secrecy constraints by allowing a non-vanishing error probability and information leakage, what is the maximum secrecy rate that we can achieve using list codes with a finite list size L that does not depend on the order of symmetrizability? For this scenario, we show that we can construct list codes with rate up to the correlated random secrecy capacity, where the list size L depends only on the requested average error probability.
The rest of the paper is organized as follows: In Section 2, we describe the model of the AVWC and introduce the principles of correlated and uncorrelated codes along with the principle of list codes. In Section 3, we present our coding theorem that establishes a full characterization of the list secrecy capacity and give a detailed proof of this theorem using two different coding schemes. In Section 4, we investigate the effect of relaxing the reliability and secrecy constraints on the list size L. Finally, we conclude the paper in Section 5.
Notation. In this paper, random variables are denoted by capital letters and their realizations by the corresponding lower case letters, while calligraphic letters are used to denote sets. Additionally, we use fraktur letters to denote a set of sets. For a natural number n ∈ N, we use X^n to denote the sequence of random variables (X_1, ..., X_n), where X_i is the i-th variable in the sequence. A probability distribution for the random variable X is denoted by P_X, where P_X(x) denotes the probability of the event P[X = x]. The set of all probability distributions on X is denoted by P(X), while we use P_0^n(X) to denote the set of types of all sequences x^n ∈ X^n. For each distribution P_X ∈ P_0^n(X), we define the typical set T_{P_X} = {x^n : (1/n) N(x|x^n) = P_X(x) for all x ∈ X}, where N(x|x^n) denotes the number of times the element x occurs in the sequence x^n. For conditional distributions P_{A|B} ∈ P(A|B) and P_{B|C} ∈ P(B|C), we define the composition P_{A|C} = P_{A|B} ∘ P_{B|C} ∈ P(A|C) by P_{A|C}(a|c) = ∑_{b∈B} P_{A|B}(a|b) P_{B|C}(b|c).

2. System model

2.1. Arbitrarily varying wiretap channels. Consider a communication scenario in which the channel undergoes two classes of attacks at the same time. The first attack is carried out by an active jammer that threatens the reliability of the communication by maliciously manipulating the channel state for each transmission. Simultaneously, a passive eavesdropper tries to extract any information about the transmitted message, consequently threatening the secrecy of the transmission. This communication scenario is shown in Fig. 1. It is captured by the model of the AVWC as follows: let S be a finite state set, while X, Y and Z represent finite input and output sets. For every state s ∈ S, the channel between the transmitter and the legitimate receiver is given by a stochastic matrix W_s : X → P(Y), while the channel between the transmitter and the eavesdropper is given by a stochastic matrix V_s : X → P(Z).
Thus, for a state sequence s^n ∈ S^n of length n produced by the active jammer, an input sequence x^n ∈ X^n produced by the transmitter, an output sequence y^n ∈ Y^n observed by the legitimate receiver and an output sequence z^n ∈ Z^n observed by the passive eavesdropper, we define the discrete memoryless channels
(1) W^n_{s^n}(y^n|x^n) = ∏_{i=1}^n W_{s_i}(y_i|x_i) and V^n_{s^n}(z^n|x^n) = ∏_{i=1}^n V_{s_i}(z_i|x_i).
An AVWC is given by the collection of all channels in (1) for all possible state sequences s^n as follows:
Definition 2.1. The discrete memoryless arbitrarily varying wiretap channel (AVWC) is denoted by the pair (W, V) and is given by the family of pairs with common input (W, V) = {(W^n_{s^n}, V^n_{s^n}) : s^n ∈ S^n, n = 1, 2, ...}, where W represents the AVC between the transmitter and the legitimate receiver, while V represents the AVC between the transmitter and the eavesdropper.
Since the channel is memoryless, the behavior of the channel should depend only on the number of times each channel state s is imposed, and not on the order of these states. This observation motivates the introduction of the average channel. For any probability distribution q ∈ P(S), we define
(2) W_q(y|x) = ∑_{s∈S} q(s) W_s(y|x).
In some literature, the average channel is defined over a block length n, such that for any probability distribution q̄ ∈ P(S^n), we have
(3) W^n_{q̄}(y^n|x^n) = ∑_{s^n∈S^n} q̄(s^n) W^n_{s^n}(y^n|x^n).
Secure communication over an AVWC faces two challenges. The first one is due to the existence of the active jammer that aims to disturb any reliable communication over the AVC between the transmitter and the legitimate receiver. The jammer carries out this role by selecting a certain jamming strategy, i.e. a channel state sequence s^n ∈ S^n. In order to gain a better understanding of the role of the jammer, we need to highlight the concept of a symmetrizable AVC introduced in [11].
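As a quick numerical sanity check, the average channel W_q(y|x) = ∑_s q(s) W_s(y|x) is simply a state-weighted convex mixture of the component stochastic matrices, and is therefore again a stochastic matrix. The sketch below uses purely illustrative toy matrices of our own, not any channel from the paper:

```python
# W_q(y|x) = sum_s q(s) * W_s(y|x): a convex mixture of the component
# stochastic matrices (toy numbers, purely illustrative).
def average_channel(W, q):
    S, nx, ny = len(W), len(W[0]), len(W[0][0])
    return [[sum(q[s] * W[s][x][y] for s in range(S))
             for y in range(ny)] for x in range(nx)]

W = [
    [[1.0, 0.0], [0.0, 1.0]],   # W_0: noiseless binary channel
    [[0.5, 0.5], [0.5, 0.5]],   # W_1: completely noisy binary channel
]
Wq = average_channel(W, [0.25, 0.75])
print(Wq)   # [[0.625, 0.375], [0.375, 0.625]] -- rows still sum to 1
```

Since every W_s has rows summing to one and q is a probability vector, every row of W_q also sums to one, as the printout confirms.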
Definition 2.2. An AVC W is symmetrizable, if there exists an auxiliary channel σ : X → P(S), such that
(4) ∑_{s∈S} σ(s|x̃) W_s(y|x) = ∑_{s∈S} σ(s|x) W_s(y|x̃)
holds for every x, x̃ ∈ X and y ∈ Y.
The condition in (4) implies that for a symmetrizable AVC, the jammer can select a jamming strategy which emulates a valid channel input. An example of a symmetrizable AVC was given in [23] as follows: consider a binary AVC where, for some ε ∈ [0, 1], the AVC W is defined by two stochastic matrices W_1 and W_2. One can show that for every ε ∈ [0, 0.5], there exists an auxiliary channel σ : X → P(S) such that the symmetrizability condition in (4) is satisfied. This is because the condition in (4) reduces to a system of linear equations, which leads to the following auxiliary channel σ: σ(1|1) = ε/(1 − ε) and σ(1|2) = (1 − 2ε)/(1 − ε). This implies that for this class of AVCs, there exists a jamming strategy that makes it impossible for the receiver to differentiate between the channel input x and the channel state s. A generalization of the concept of symmetrizable AVCs was given in [4,16] as follows:
Definition 2.3. An AVC W is L-symmetrizable, if there exists an auxiliary channel σ : X^L → P(S), such that for every permutation π of the sequence (1, ..., L + 1),
(5) ∑_{s∈S} σ(s|x_2, ..., x_{L+1}) W_s(y|x_1) = ∑_{s∈S} σ(s|x_{π(2)}, ..., x_{π(L+1)}) W_s(y|x_{π(1)})
holds for every x^{L+1} ∈ X^{L+1} and y ∈ Y.
Similar to the implication of the symmetrizability condition in (4), the condition in (5) implies that for an L-symmetrizable AVC, the jammer can emulate L valid replicas of the channel input. Hence, for a given observation y^n at the receiver, there exist L + 1 input sequences that might have been sent. For a given AVC W, the largest L for which this AVC is L-symmetrizable is called the order of symmetrizability and is denoted by L(W). An example of an L-symmetrizable AVC was given in [4, Theorem 3]. Remark 1. One can easily see that the L-symmetrizability condition in (5) reduces to the symmetrizability condition in (4) for L = 1. Eq. (5) also implies that any L-symmetrizable channel is also L′-symmetrizable, for all 0 ≤ L′ ≤ L.
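To make the symmetrizability condition (4) concrete, the following sketch checks it numerically for a hypothetical "adder" AVC with y = x + s, a textbook instance of a symmetrizable channel. Both the channel and the map σ below are our own illustration, not the example from [23]:

```python
import itertools

# Hypothetical adder AVC: X = S = {0,1}, Y = {0,1,2}, y = x + s.
X, S, Y = (0, 1), (0, 1), (0, 1, 2)

def W(s, x, y):          # W_s(y|x), deterministic here for clarity
    return 1.0 if y == x + s else 0.0

def symmetrizes(sigma):
    """Check condition (4): sum_s sigma(s|xt)*W_s(y|x) must equal
    sum_s sigma(s|x)*W_s(y|xt) for all x, xt in X and y in Y."""
    for x, xt, y in itertools.product(X, X, Y):
        lhs = sum(sigma[xt][s] * W(s, x, y) for s in S)
        rhs = sum(sigma[x][s] * W(s, xt, y) for s in S)
        if abs(lhs - rhs) > 1e-12:
            return False
    return True

# sigma(s|x) = 1{s = x}: the jammer feeds a second "valid input" as state.
sigma = {x: {s: 1.0 if s == x else 0.0 for s in S} for x in X}
print(symmetrizes(sigma))   # True
```

With this σ, the receiver's output statistics are identical whether x was the input and x̃ the state or vice versa, which is exactly the ambiguity that defeats unique decoding.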
From the previous discussion, one can see that establishing reliable communication over the AVC between the transmitter and the legitimate receiver is not an easy task, especially under the following assumption: we assume that the transmitter and the legitimate receiver know the state space S, but have no knowledge about the actual state sequence s^n selected by the jammer. We also assume that there is no a priori distribution that governs the production of the state sequence s^n.
The second challenge facing secure communication over AVWCs is the existence of a passive eavesdropper that tries to extract any information about the transmitted message. Our target is to construct a coding scheme that overcomes the different jamming strategies of the jammer and consequently establishes reliable communication between the transmitter and the legitimate receiver for all state sequences s^n ∈ S^n. At the same time, our coding scheme should keep the eavesdropper completely ignorant of the transmitted information. One of the main factors that we need to consider during the construction of such a coding scheme is what information is allowed to be shared between the eavesdropper and the jammer.

2.2. Code concepts.
Two main coding techniques have been used to establish reliable and secure communication over AVWCs. The first technique is based on uncorrelated codes, in which a predefined (encoder, decoder) pair is used throughout the whole transmission. The second technique is known as correlated random codes, in which an (encoder, decoder) pair is selected based on some sort of common randomness shared between the transmitter and the legitimate receiver.
Definition 2.4. An uncorrelated code C^s_uc for the AVWC (W, V) consists of: a set of confidential messages M_c, a stochastic encoder
(6) E : M_c → P(X^n)
that maps a confidential message m_c ∈ M_c to a codeword x^n(m_c) ∈ X^n according to the conditional probability E(x^n|m_c), and a deterministic decoder ϕ that maps each channel observation at the legitimate receiver to the corresponding intended message or an error message. Now, in order to measure the reliability performance of an uncorrelated code C^s_uc, we use the average probability of decoding error given by
ē(C^s_uc) = max_{s^n∈S^n} ē(s^n|C^s_uc) = max_{s^n∈S^n} (1/|M_c|) ∑_{m_c} ∑_{x^n} ∑_{y^n: ϕ(y^n)≠m_c} W^n_{s^n}(y^n|x^n) E(x^n|m_c).
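To illustrate how the worst-case average error probability is evaluated, and how much power the maximization over state sequences gives the jammer, here is a brute-force computation for a deliberately tiny hypothetical code (deterministic encoder, deterministic bit-flipping states). All numbers are ours, not the paper's:

```python
import itertools

# Toy legitimate link: X = Y = S = {0,1}; state 0 is noiseless,
# state 1 flips the input bit (all channels deterministic for clarity).
def W(s, x, y):
    return 1.0 if y == (x ^ s) else 0.0

n = 2
codebook = {0: (0, 0), 1: (1, 1)}            # deterministic encoder
def decode(yn):                               # minimum-distance decoder
    return min(codebook,
               key=lambda m: sum(a != b for a, b in zip(yn, codebook[m])))

def avg_error(sn):
    """Average decoding error for a fixed state sequence s^n."""
    total = 0.0
    for m, xn in codebook.items():
        for yn in itertools.product((0, 1), repeat=n):
            p = 1.0
            for xi, si, yi in zip(xn, sn, yn):
                p *= W(si, xi, yi)
            if p > 0 and decode(yn) != m:
                total += p
    return total / len(codebook)

print(avg_error((0, 0)))   # noiseless states: 0.0
print(max(avg_error(sn)    # worst-case jammer over all s^n
          for sn in itertools.product((0, 1), repeat=n)))
```

Under benign states the code is error-free, yet the maximizing state sequence (flip every symbol) swaps the two codewords and drives the average error to 1: the maximization over s^n in the definition is what makes AVC coding hard.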
On the other hand, the secrecy performance of C^s_uc is measured by investigating the information leakage of the confidential message to the eavesdropper for every state sequence s^n ∈ S^n with respect to the strong secrecy criterion as follows:
L(C^s_uc) = max_{s^n∈S^n} L(s^n|C^s_uc) = max_{s^n∈S^n} I(M_c; Z^n_{s^n}|C^s_uc),
where M_c represents a uniformly distributed random variable over the confidential message set M_c, while Z^n_{s^n} is a random variable for the channel observation at the eavesdropper for state sequence s^n.
Definition 2.5. A non-negative number R^s_uc is an achievable secrecy rate for the AVWC (W, V) if for all δ, λ, τ > 0 there is an n(δ, λ, τ) ∈ N, such that for all n > n(δ, λ, τ), there exists a sequence of uncorrelated codes (C^s_uc)_n that satisfies the following constraints:
(1/n) log |M_c| ≥ R^s_uc − δ,  ē(C^s_uc) ≤ λ,  L(C^s_uc) ≤ τ.
The uncorrelated secrecy capacity C^s_uc(W, V) is given by the supremum of all achievable secrecy rates R^s_uc. In [23], the uncorrelated secrecy capacity was established using a coding scheme that does not enforce any restrictions on the information shared between the eavesdropper and the jammer. In particular, the eavesdropper is allowed to know the jamming strategy used to manipulate the channel state, or even the exact channel state selected by the jammer. At the same time, the jammer is allowed to know any information possessed by the eavesdropper. Nevertheless, it has been shown in [2,11,23] that uncorrelated codes cannot establish reliable communication over an AVWC if the AVC between the transmitter and the legitimate receiver is symmetrizable. Unfortunately, a lot of channels of practical relevance fall into the class of symmetrizable AVCs [17].
Definition 2.6.
A correlated random code C^s_cr for the AVWC (W, V) consists of: a set of confidential messages M_c, a set G of values of the common randomness, a probability distribution P_Γ that governs the selection of a certain realization of the common randomness γ ∈ G, a set of stochastic encoders
(13) E_γ : M_c → P(X^n)
that map a confidential message m_c ∈ M_c to a codeword x^n(m_c) ∈ X^n according to the conditional probability E_γ(x^n|m_c), and a set of deterministic decoders ϕ_γ that map each channel observation at the legitimate receiver to the corresponding intended message or an error message.
The previous definition implies that any correlated random code C^s_cr can be interpreted as a family of uncorrelated codes of Definition 2.4 as follows:
(15) C^s_cr = {C^s_uc(γ) : γ ∈ G}.
Similar to uncorrelated codes, we use the average probability of decoding error to evaluate the reliability performance of a correlated random code C^s_cr as follows:
ē(C^s_cr) = max_{s^n∈S^n} ē(s^n|C^s_cr) = max_{s^n∈S^n} (1/|M_c|) ∑_{m_c} ∑_γ ∑_{x^n} ∑_{y^n: ϕ_γ(y^n)≠m_c} W^n_{s^n}(y^n|x^n) E_γ(x^n|m_c) P_Γ(γ).
On the other hand, the secrecy performance of C^s_cr is evaluated with respect to two secrecy criteria. The first criterion, called the mean secrecy criterion, investigates the average information leakage of the confidential message to the eavesdropper with respect to the strong secrecy criterion, for every state sequence s^n ∈ S^n, averaged over all realizations of the shared common randomness γ ∈ G:
(17) L_mean(C^s_cr) = max_{s^n∈S^n} ∑_{γ∈G} P_Γ(γ) I(M_c; Z^n_{s^n,γ}),
where Z^n_{s^n,γ} is the channel output at the eavesdropper for a state sequence s^n when the encoder E_γ is used. The second criterion is a more conservative criterion known as the maximum secrecy criterion and is given by
(18) L_max(C^s_cr) = max_{s^n∈S^n} max_{γ∈G} I(M_c; Z^n_{s^n,γ}).
It is important to highlight that both secrecy criteria assume that the eavesdropper has access to the common randomness shared between the transmitter and the legitimate receiver. However, the mean secrecy criterion takes the distribution P_Γ into consideration, while the maximum secrecy criterion considers the information leakage for every realization γ. It is also fair to assume that the eavesdropper has access to the shared common randomness, because otherwise the transmitter and the legitimate receiver could use this common randomness to implement alternative secure encoding schemes such as a one-time pad, where the common randomness serves as a shared secret key.
Definition 2.7. A non-negative number R^{s,mean}_cr is an achievable mean secrecy rate for the AVWC (W, V) if for all δ, λ, τ > 0 there is an n(δ, λ, τ) ∈ N, such that for all n > n(δ, λ, τ), there exists a sequence of correlated random codes (C^s_cr)_n that satisfies the following constraints:
(19) (1/n) log |M_c| ≥ R^{s,mean}_cr − δ,  (20) ē(C^s_cr) ≤ λ,  (21) L_mean(C^s_cr) ≤ τ.
The correlated random mean secrecy capacity C^{s,mean}_cr(W, V) is given by the supremum of all achievable mean secrecy rates R^{s,mean}_cr.
Definition 2.8. A non-negative number R^{s,max}_cr is an achievable max secrecy rate for the AVWC (W, V) if for all δ, λ, τ > 0 there is an n(δ, λ, τ) ∈ N, such that for all n > n(δ, λ, τ), there exists a sequence of correlated random codes (C^s_cr)_n that satisfies the constraints in (19) and (20), in addition to:
(22) L_max(C^s_cr) ≤ τ.
The correlated random max secrecy capacity C^{s,max}_cr(W, V) is given by the supremum of all achievable max secrecy rates R^{s,max}_cr.

Remark 2.
Although it is immediately clear that C^{s,max}_cr(W, V) ≤ C^{s,mean}_cr(W, V), it was shown in [26, Theorem 6] that both secrecy capacities are equivalent. Thus, throughout the rest of the paper we will use C^s_cr(W, V) to denote the correlated random secrecy capacity of the AVWC (W, V).
Although it was shown in [1,2] that correlated random codes can establish reliable communication over symmetrizable AVWCs, the usage of correlated random codes in most practical scenarios is still questionable. This is because, as highlighted by Definition 2.6, correlated random codes require some sort of pre-established coordination between the transmitter and the legitimate receiver, which might not always be available. In addition, it was shown that the amount of coordination resources needed, |G|, grows with the code block length n [2,26].
The previous results were derived under the assumption of limited communication between the eavesdropper and the jammer. In particular, the jammer is allowed to inform the eavesdropper about the jamming strategy used, or even the exact channel state selected, but the eavesdropper is not allowed to inform the jammer about the exact realization of the common randomness. This is because the reliability analysis provided in [2] is only valid if the jammer chooses s^n ∈ S^n independently of the confidential message m_c and the selected encoder E_γ.

2.3. List codes.
List codes are a special class of uncorrelated codes, in which the decoder outputs a list of L possible messages instead of deciding on exactly one message. Such codes have been used for the AVC without secrecy constraints to overcome the symmetrizability problem of uncorrelated codes and the need of correlated random codes for shared common randomness [4,16,22,6]. We start by presenting the following definition.
Definition 2.9. A list code C_list with list size L for the AVC W consists of: a set of public messages M_p, an encoder that maps each public message m_p ∈ M_p to a codeword x^n(m_p) ∈ X^n, and a deterministic list decoder that maps each channel observation at the legitimate receiver into a list of up to L messages or an error message. The reliability performance of C_list is measured in terms of its average probability of error, where a decoding error occurs whenever the transmitted message is not contained in the output list.
Definition 2.10. A non-negative number R_p is an achievable public list rate for the AVC W if for all δ, λ > 0 there is an n(δ, λ) ∈ N, such that for all n > n(δ, λ), there exists a sequence of list codes (C_list)_n whose rate is at least R_p − δ and whose average probability of error is at most λ. The list capacity C_list(W, L) is given by the supremum of all achievable rates R_p. Now, we extend the concept of list decoding to fit the model of secure communication over an AVWC as follows:
Definition 2.11. A confidential list code C^s_list with list size L for the AVWC (W, V) consists of: a set of confidential messages M_c, a stochastic encoder
(26) E : M_c → P(X^n)
that maps a confidential message m_c ∈ M_c to a codeword x^n ∈ X^n according to the conditional probability E(x^n|m_c), and a deterministic list decoder φ with list size L that maps a channel observation at the legitimate receiver into a list of up to L messages or an error message, i.e. φ : Y^n → J_L(M_c), where J_L(M_c) is the set of all subsets of M_c with cardinality at most L.
Similar to the non-secrecy case, we use the average error probability to measure the reliability performance of the list code C^s_list:
ē(C^s_list) = max_{s^n∈S^n} (1/|M_c|) ∑_{m_c} ∑_{x^n} ∑_{y^n: m_c ∉ φ(y^n)} W^n_{s^n}(y^n|x^n) E(x^n|m_c).
On the other hand, we use the information leakage of the confidential message M_c to the eavesdropper with respect to the strong secrecy criterion to investigate the secrecy performance of the list code C^s_list:
L(C^s_list) = max_{s^n∈S^n} I(M_c; Z^n_{s^n}).
Definition 2.12. A non-negative number R^s_list is an achievable secrecy rate for the AVWC (W, V), if for all δ, λ, τ > 0 there is an n(δ, λ, τ) ∈ N, such that for all n > n(δ, λ, τ), there exists a sequence of list codes (C^s_list)_n with list size L that satisfies the following constraints:
(1/n) log |M_c| ≥ R^s_list − δ,  ē(C^s_list) ≤ λ,  L(C^s_list) ≤ τ.
The list secrecy capacity C^s_list(W, V, L) is given by the supremum of all achievable secrecy rates R^s_list. We now introduce an additional coding scheme that can be used to transmit public and confidential messages at the same time. This scheme turns out to be very useful in some of the investigated scenarios of AVWCs.
Definition 2.13. A public-confidential list code C^ps_list with list size L for the AVWC (W, V) consists of: a set of public messages M_p, a set of confidential messages M_c, a stochastic encoder that maps a public message m_p ∈ M_p and a confidential message m_c ∈ M_c into a codeword x^n(m_p, m_c) ∈ X^n according to the conditional probability E(x^n|m_p, m_c), and a deterministic list decoder that maps a channel observation at the legitimate receiver into a list of up to L public and confidential message pairs or an error message, i.e. a map into J_L(M_p × M_c), where J_L(M_p × M_c) is the set of all subsets of M_p × M_c with cardinality at most L.
The reliability performance of C^ps_list is measured in terms of its average probability of error, defined as for C^s_list with the message pair (m_p, m_c) in place of m_c. On the other hand, the secrecy performance of C^ps_list is measured with respect to two secrecy criteria. The first criterion is called the mean secrecy criterion and is given by
(33) L_mean(C^ps_list) = max_{s^n∈S^n} (1/|M_p|) ∑_{m_p} I(M_c; Z^n_{s^n,m_p}),
where Z^n_{s^n,m_p} is the channel output at the eavesdropper for the state sequence s^n when the public message m_p is transmitted. L_mean(C^ps_list) measures the information leakage of the confidential message M_c to the eavesdropper averaged over the public message M_p. The second criterion investigates the information leakage of the confidential message M_c to the eavesdropper for each value of the public message m_p ∈ M_p, then uses the maximum leakage as a measure for the secrecy performance of C^ps_list. That is why it is called the maximum secrecy criterion; it is given by
(34) L_max(C^ps_list) = max_{s^n∈S^n} max_{m_p∈M_p} I(M_c; Z^n_{s^n,m_p}).
The two secrecy criteria in (33) and (34) are similar to the two secrecy criteria used to investigate the secrecy performance of a correlated random code, given in (17) and (18) respectively.
Definition 2.14. A rate pair (R_p, R^mean_c) ∈ R^2_+ is an achievable public-confidential rate pair for the AVWC (W, V) with list size L with respect to the mean secrecy criterion, if for all τ, σ, λ, δ > 0, there is an n(τ, σ, λ, δ) ∈ N, such that for all n > n(τ, σ, λ, δ), there exists a sequence of list codes (C^ps_list)_n that satisfies the following:
(35) (1/n) log |M_p| ≥ R_p − δ,  (36) (1/n) log |M_c| ≥ R^mean_c − σ,  (37) ē(C^ps_list) ≤ λ,  (38) L_mean(C^ps_list) ≤ τ.
The public-confidential list mean secrecy capacity C^{ps,mean}_list(W, V, L) is given by the supremum of all achievable mean confidential rates R^mean_c.
Definition 2.15.
A rate pair (R_p, R^max_c) ∈ R^2_+ is an achievable public-confidential rate pair for the AVWC (W, V) with list size L with respect to the maximum secrecy criterion, if for all τ, σ, λ, δ > 0, there is an n(τ, σ, λ, δ) ∈ N, such that for all n > n(τ, σ, λ, δ), there exists a sequence of list codes (C^ps_list)_n that satisfies (35), (36) and (37), in addition to
L_max(C^ps_list) ≤ τ.
The public-confidential list maximum secrecy capacity C^{ps,max}_list(W, V, L) is given by the supremum of all achievable maximum confidential rates R^max_c.
Remark 3. 1) Although one can easily notice that C^{ps,max}_list(W, V, L) ≤ C^{ps,mean}_list(W, V, L), we will show that the two capacities are equivalent. This result is similar to the one established in [26, Theorem 6] for the mean and maximum correlated random secrecy capacities. Thus, throughout the rest of the paper, we will use the notation C^ps_list(W, V, L) to denote both the mean and the maximum list secrecy capacity. 2) In this paper, we will limit our investigation to the case where the public rate R_p vanishes as n → ∞. For this case, it follows immediately that C^ps_list(W, V, L) and C^s_list(W, V, L) are equivalent. 3) Since we limit our investigation to the scenario where lim_{n→∞} R_p = 0, one might wonder about the role played by the public message m_p. In general, the public message m_p can be used to perform services such as synchronization and channel estimation. Such services are very important for any communication system in practice, yet they do not have much impact on the rate.
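As a toy illustration of why list decoding helps against symmetrizability, consider a hypothetical adder AVC (y_i = x_i + s_i over X = S = {0,1}) with two repetition codewords. Unique decoding (L = 1) can be forced into error by the jammer, while a decoder that lists every message consistent with some state sequence never loses the transmitted message once L = 2. This is our own example, not one from the paper:

```python
import itertools

# Hypothetical adder AVC (symmetrizable): y_i = x_i + s_i over
# X = S = {0,1}, Y = {0,1,2}. Codebook: two repetition codewords.
n = 2
codebook = {0: (0, 0), 1: (1, 1)}

def list_decode(yn, L):
    """Output up to L messages whose codeword is consistent with yn
    under SOME state sequence, i.e. y_i - x_i in {0,1} for all i.
    Truncation to L entries is a crude tie-break for this toy."""
    feasible = [m for m, xn in codebook.items()
                if all(yi - xi in (0, 1) for xi, yi in zip(xn, yn))]
    return feasible[:L]

def worst_case_error(L):
    """Worst average list-decoding error over all state sequences."""
    worst = 0.0
    for sn in itertools.product((0, 1), repeat=n):
        errs = 0
        for m, xn in codebook.items():
            yn = tuple(x + s for x, s in zip(xn, sn))
            if m not in list_decode(yn, L):
                errs += 1
        worst = max(worst, errs / len(codebook))
    return worst

print(worst_case_error(1))   # 0.5: the jammer defeats unique decoding
print(worst_case_error(2))   # 0.0: list size 2 always keeps the message
```

The transmitted codeword is always feasible for the true state sequence, so a list of size 2 (here, larger than the channel's symmetrizability) always contains it, mirroring the role of L > L(W) in the paper.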

List secrecy capacity of the AVWC
In this section, we present a full characterization of the list secrecy capacity C^s_list(W, V, L) of the AVWC using two different coding schemes. We start by stating the main tools needed to derive our results. We then state our first coding theorem and present a detailed proof for it, in which we use a list code of Definition 2.11. We then turn to our second coding theorem, where we use a public-confidential list code of Definition 2.13 and limit our investigation to the case where the public rate vanishes. Finally, we discuss and compare the two coding schemes, showing that they both lead to the same list secrecy capacity.
3.1. Important tools. Before we present our main contribution, we highlight some of the previously established results for AVCs and AVWCs. We start by presenting a lemma which shows that reliable communication cannot be established over an L-symmetrizable AVC using list codes with list size L.
Lemma 3.1. If the AVC W is L-symmetrizable, then C_list(W, L) = 0.
This lemma generalizes the result established in [11, Theorem 1] for uncorrelated codes, which can be interpreted as list codes with list size L = 1. The main idea of the proof is to show that if the AVC W is L-symmetrizable, then for any list code C_list with list size L, there exists a state sequence s^n ∈ S^n such that the average error probability is bounded away from zero as n approaches infinity, as long as the list size L is finite. This implies that no list code with list size L exists such that the average decoding error probability vanishes as n → ∞, which consequently means a zero capacity. Next, we present the main tools that guide the encoding and decoding process for a list code C^s_list of Definition 2.11. These tools help us to establish reliable communication over an AVC W with order of symmetrizability L(W), using list codes with list size L ≥ L(W) + 1. We start with the encoding process and notice that C^s_list requires a stochastic encoder as shown in (26). We transform this stochastic encoder into a deterministic one using a randomization message set M_r: the transmitter selects a randomization message m_r ∈ M_r uniformly at random and transmits the codeword x^n(m_c, m_r). We now present a lemma that guides the construction of the codewords x^n(m_c, m_r) ∈ X^n produced by this encoder, which form the codebook of the list code C^s_list. Lemma 3.2.
[16, Lemma 1] For any L ≥ 1, ε > 0 and β > 0, there exists an n_0(ε, L), such that for all n ≥ n_0(ε, L), all message sets M_c, M_r satisfying |M_c||M_r| ≥ L · 2^{nε} and every type P_X ∈ P_0^n(X) satisfying min_{x: P_X(x)>0} P_X(x) ≥ β, there exist codewords x^n(m_c, m_r) ∈ T_{P_X} ⊂ X^n, for m_c ∈ M_c and m_r ∈ M_r, such that, upon setting R = (1/n) log(|M_c||M_r|/L), the conditions (41)-(45) of [16, Lemma 1] hold for all x^n ∈ T_{P_X}, all s^n ∈ S^n and every joint type, where J = {j_1, ..., j_L} ∈ J_L(M_c × M_r) and x^n_J denotes the ordered L-tuple (x^n(j_1), ..., x^n(j_L)).
The previous lemma can be viewed as a generalization of the result established in [11, Lemma 3] for uncorrelated codes, i.e. L = 1. In order to understand the role played by Lemma 3.2 in the encoding process of C^s_list, we analyze the statement of the lemma as follows:
Assumptions. Lemma 3.2 starts by fixing a list size L ≥ 1 that will be used during the decoding process. Then, for some constant ε > 0, it requires the code block length n to be sufficiently large, i.e. n ≥ n_0(ε, L). Next, Lemma 3.2 addresses the messages to be transmitted from the sets M_c and M_r, where a constraint on the product of the cardinalities of the message sets must be fulfilled. Finally, Lemma 3.2 fixes a probability distribution P_X on X, such that P_X belongs to the set of types on X^n and the minimum non-zero probability of an element x is greater than or equal to β > 0.
Results. For any sequence x n ∈ T PX , a given sequence s n ∈ S n , a fixed type P XX L S and under the previous assumptions, Lemma 3.2 claims that, using the probability distribution P X , we can generate a random codebook {x n (m c , m r ) ∈ T PX : (m c , m r ) ∈ M c × M r } ⊂ X n such that the conditions in (41)-(45) are satisfied. It was shown in [16] that any randomly generated set of codewords satisfies the conditions in (41)-(45) of Lemma 3.2 with probability approaching one exponentially fast as n → ∞. Besides Lemma 3.2, we need to define a decoding algorithm for the list code C s list . Thus, we present the following definition: Definition 3.3. The decoder includes the pair (m c , m r ) in its output list if the following two conditions hold: 1) There exists an s n ∈ S n such that the corresponding typicality condition holds, where P YXS is the joint type of (y n , x n (m c , m r ), s n ), P X is the type of x n (m c , m r ), and P S is the type of s n . 2) For every choice of L other distinct codewords (x n (j 1 ), . . . , x n (j L )), where j i ∈ M c × M r , such that each of them satisfies an analogous condition for some s n i ∈ S n , where P YXj i Si is the joint type of (y n , x n (j i ), s n i ), P Xj i is the type of x n (j i ), and P Si is the type of s n i , the corresponding mutual information condition holds, where the mutual information is calculated with respect to the joint distribution P XYX L S given by the joint type of (x n (m c , m r ), y n , x n (j 1 ), . . . , x n (j L ), s n ).
The previous list decoder is a generalization of the decoder introduced in [11, Definition 3] for uncorrelated codes. Lemma 3.2 and Definition 3.3 are the main pillars needed to realize the encoding and decoding process of a list code C s list given by Definition 2.11. We are now ready to present the reliable-communication results. We present a lemma implying that if the AVC W is not L-symmetrizable, then the list decoder in Definition 3.3 cannot produce more than L values as long as the rate is bounded by the minimum mutual information between the channel input X and the averaged channel output Y q .
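To make the constant-composition codebook construction behind Lemma 3.2 concrete, the following minimal Python sketch (all names are hypothetical, not taken from [16]) draws each codeword uniformly from the type class T_{P_X}; the lemma asserts that such a random choice satisfies the conditions (41)-(45) with high probability:

```python
import random
from collections import Counter

def type_class_codeword(n, p_x, rng):
    """Draw one sequence uniformly from the type class T_{P_X}.

    p_x maps each symbol to a probability that is a multiple of 1/n,
    so that the counts n * p_x[x] are integers (P_X is an n-type)."""
    symbols = []
    for x, p in p_x.items():
        symbols.extend([x] * round(n * p))
    assert len(symbols) == n
    rng.shuffle(symbols)  # a uniform permutation => uniform over T_{P_X}
    return tuple(symbols)

def random_codebook(n, p_x, messages, seed=0):
    """Codebook {x^n(m)}: every codeword has the same type P_X."""
    rng = random.Random(seed)
    return {m: type_class_codeword(n, p_x, rng) for m in messages}

book = random_codebook(n=8, p_x={0: 0.5, 1: 0.5}, messages=range(4))
# every codeword has exactly n/2 ones, i.e. all codewords share the type P_X
assert all(Counter(c)[1] == 4 for c in book.values())
```

Fixing the composition of every codeword is what makes the joint-type arguments of the lemma possible, since all quantities then depend only on empirical distributions.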
Lemma 3.4. [16, Lemma 3] For an AVC W with an order of symmetrizability L(W), let L ≥ L(W) + 1 and β > 0. Then for any δ > 0, n ≥ n 0 (β, δ) and any type P X ∈ P n 0 (X ) such that min x:PX(x)>0 P X (x) ≥ β, there exists a list code C s list with list size L whose rate approaches min q∈P(S) I(X; Y q ) − ζ, for ζ > 0, while the average error probability vanishes. This lemma was proved in [16] using a list code C s list given by the set of codewords x n (m c , m r ), for m c ∈ M c and m r ∈ M r , that satisfy the constraints in Lemma 3.2, along with the list decoder in Definition 3.3.
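As a toy illustration of how a type-based list decoder can rank candidates, the sketch below is purely illustrative: it replaces the threshold conditions of Definition 3.3 with a simple empirical-mutual-information score and returns the L best-scoring messages (all names are hypothetical):

```python
import math
from collections import Counter

def empirical_mi(xs, ys):
    """Empirical mutual information (in bits) of the joint type of (xs, ys)."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def list_decode(y, codebook, L):
    """Return the L messages whose codewords score highest against y
    (an illustrative stand-in for the decoder of Definition 3.3)."""
    ranked = sorted(codebook, key=lambda m: empirical_mi(codebook[m], y),
                    reverse=True)
    return ranked[:L]

# Noiseless toy example: the transmitted codeword is ranked first.
book = {0: (0, 0, 0, 0), 1: (1, 1, 1, 1), 2: (0, 1, 0, 1)}
assert list_decode((0, 1, 0, 1), book, 1) == [2]
```

The real decoder of Definition 3.3 additionally minimizes over state-sequence types and applies a threshold against the L competing codewords; the score-and-truncate structure above is only meant to convey why the output is a list of at most L values.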
Remark 4. It is worth mentioning that Lemmas 3.2 and 3.4, as well as Definition 3.3, were originally presented using one message set M. However, in this paper, we present them using two message sets M c and M r so that they fit our coding scheme. This modification does not affect the validity of these results, because it makes no difference whether the codewords are enumerated by one index taken from a message set M or by two indices taken from M c × M r .
The final result that we need to highlight is related to secure communication over AVWCs. It gives a lower bound on the amount of randomization needed to confuse the eavesdropper.

Lemma 3.5. [23, Lemma 2]
For any τ, β > 0, there exist values δ(τ ) > 0 and n 0 (τ ), such that for all n ≥ n 0 (τ ) and every type P X ∈ P n 0 (X ) satisfying min x:PX(x)>0 P X (x) ≥ β, there exist codewords x n (m c , m r ) ∈ T X ⊂ X n , where m c ∈ M c and m r ∈ M r , such that for an AVC V, for all s n ∈ S n and m c ∈ M c , if (1/n) log |M r | ≥ max q∈P(S) I(X; Z q ) + τ , then (47) holds, where E[·] is the expectation, X n is distributed according to P(X n = x n ) := (1/|T X |) 1 TX (x n ) and lim τ →0 δ(τ ) = 0.
Remark 5. Using the triangle inequality along with the relation between the total variation distance and the mutual information established in [12, Lemma 2.7], it was shown in [23] that the condition in (47) ensures that the information leakage of the confidential message to the eavesdropper, with respect to the strong secrecy criterion, decays exponentially fast.

3.2. First coding theorem.
In this section, we present the main contribution of this paper: a full characterization of the list secrecy capacity of AVWCs. We start by defining the following multi-letter formula. Let (W, V) be an AVWC; for every n ∈ N, let U n := [|X | n ] and define C * (W, V) as follows: (48) C * (W, V) := lim n→∞ (1/n) max P Un , P X n |Un ∈P(X n |Un) [ min q∈P(S n ) I(U n ; Y n q ) − max s n ∈S n I(U n ; Z n s n ) ].
It was shown in [26, Theorem 6] that the correlated random secrecy capacity C s cr (W, V) is equivalent to C * (W, V). It was also shown that C * (W, V) does not change if the sets over which the minimum and the maximum are taken in (48) are replaced by different, but related ones. In particular, max s n ∈S n I(U n ; Z n s n ) can be replaced by max q̃∈P(S n ) I(U n ; Z n q̃ ), and min q∈P(S n ) I(U n ; Y n q ) can be replaced by min q̃∈P(S) I(U n ; Y n q̃ ).
Theorem 3.6. The list secrecy capacity C s list (W, V, L) of the AVWC (W, V) is characterized by the following: C s list (W, V, L) = C * (W, V) if L ≥ L(W) + 1, and C s list (W, V, L) = 0 if L ≤ L(W), where W is the AVC between the transmitter and the legitimate receiver and L(W) is its order of symmetrizability.
The previous capacity characterization implies that for an AVWC (W, V) with an order of symmetrizability L(W) for the legitimate channel, a list code with list size L ≥ L(W) + 1 can provide reliable and secure communication at the rate given by (48), which is equivalent to the correlated random secrecy capacity. Theorem 3.6 also implies that for an AVWC (W, V) where W is symmetrizable and no common randomness is shared between the transmitter and the legitimate receiver, i.e. C s uc (W, V) = 0 and correlated random codes cannot be used, we can still achieve a positive secrecy rate equivalent to the correlated random secrecy capacity using list decoding. This implies that list codes can overcome the drawbacks of both uncorrelated and correlated random codes.
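A small numerical illustration of the (L = 1) symmetrizability notion may help here. The channel and symmetrizing map below are a standard textbook toy example (the binary additive adversarial channel), not an example from this paper: the jammer can "replay" a fake input, making the true input and the fake one statistically indistinguishable at the legitimate receiver.

```python
from itertools import product

def W(y, x, s):
    """Binary additive adversarial channel: y = x XOR s (deterministic)."""
    return 1.0 if y == (x ^ s) else 0.0

def U(s, x_fake):
    """Candidate symmetrizing channel U(s | x'): the jammer plays s = x'."""
    return 1.0 if s == x_fake else 0.0

def averaged(y, x, x_fake):
    """The jammer-averaged channel: sum_s U(s|x') * W(y|x, s)."""
    return sum(U(s, x_fake) * W(y, x, s) for s in (0, 1))

# Symmetrizability condition for L = 1: the averaged channel is symmetric
# under swapping the true input x and the fake input x'.
assert all(averaged(y, x, xf) == averaged(y, xf, x)
           for y, x, xf in product((0, 1), repeat=3))
```

Since the receiver cannot tell x from x' apart, an ordinary (L = 1) decoder must fail, which is exactly the mechanism behind the zero-capacity case of Theorem 3.6; a list decoder with L = 2 sidesteps the problem by outputting both candidates.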
We now present a detailed proof for Theorem 3.6 based on the code concept of Definition 2.11. Our achievability proof is based on a coding scheme that combines the list decoding techniques used in [16] to establish a reliable communication over AVCs and the strong secrecy techniques introduced in [26,23] for AVWCs. On the other hand, the converse follows using the standard techniques as in [26].
Coding Problem. The main issue in the proof of Theorem 3.6 is the achievability of the secrecy rate in (48) when the AVC W between the transmitter and the legitimate receiver is not L-symmetrizable. This is because the expression of C * (W, V) in (48) is calculated with respect to the AVC W̃ instead of the AVC W, where W̃ = (W • P X n |Un ) is the AVC that arises from combining the prefix channel P X n |Un ∈ P(X n |U n ) and the AVC W. This point is very important because it was shown in [23, Example 1] that a prefix channel can change a non-symmetrizable AVC into a symmetrizable one. This implies that, although W is not L-symmetrizable, W̃ might be L-symmetrizable.
The previous observation raises an important issue for our coding scheme, because the achievability list decoding techniques introduced in [16] are only valid for AVCs that are not L-symmetrizable. In order to solve this issue, we use an approach similar to the one introduced in [23]. We start by restricting the calculation of C * (W, V) to prefix channels of the form P̃ X n |Un = P Id X|X ⊗ P X n−1 |Un−1 . We then show that using this restricted class of prefix channels to calculate C * (W, V) is asymptotically as good as using the full set of prefix channels P(X n |U n ). In addition, this restricted class of prefix channels ensures that the AVC (W • P̃ X n |Un ) is not L-symmetrizable as long as W is not L-symmetrizable.
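The two properties of this restricted prefix class can be recorded compactly. The following LaTeX sketch is a reconstruction consistent with the steps detailed in the proof of the theorem, not a verbatim quotation of the paper's displays:

```latex
% (1) The identity component passes one channel use through uncoded,
%     so the overall channel factorizes:
\bigl(W^{\otimes r}\circ \widetilde{P}_{X_r|U_r}\bigr)
  \;=\; W \otimes \bigl(W^{\otimes (r-1)}\circ P^{*}_{X_{r-1}|U_{r-1}}\bigr),
% hence any channel symmetrizing the left-hand side would in particular
% symmetrize the uncoded factor W, which is impossible when W is not
% L-symmetrizable.
%
% (2) The cost of the extra uniform input letter X_\pi is at most one
%     symbol's worth of eavesdropper leakage:
I\bigl(\widetilde{U}_{r+1};Z^{r+1}_{s^{r+1}}\bigr)
  \;=\; I\bigl(U^{*}_{r};Z^{r}_{s^{r}}\bigr)
        + I\bigl(X_{\pi};Z_{s_{r+1}}\bigr)
  \;\le\; I\bigl(U^{*}_{r};Z^{r}_{s^{r}}\bigr) + \log|\mathcal{X}|.
```

After normalizing by the block length r + 1, the penalty log|X|/(r + 1) vanishes as r grows, which is why the restricted class is asymptotically as good as the full set of prefix channels.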
Proof. We start by proving the first condition of Theorem 3.6. According to Lemma 3.1, if the AVC W between the transmitter and the legitimate receiver is L-symmetrizable, then no reliable communication is possible using a list code with list size L. This implies that the list secrecy capacity C s list (W, V, L) = 0, proving the first part of the theorem.
We now turn to the second condition of the theorem, assume that the AVC W is not L-symmetrizable, and present an achievable coding scheme that consists of four main steps: 1) Prefix Channel Optimization: We start by considering the input distributions P Un and the conditional distributions P X n |Un arising from the optimization problem in (48). Without loss of generality, for every r ∈ N, let U r = X r and define the following, where W r q (y r |x r ) = Σ s r q(s r )W r s r (y r |x r ). Then, for an arbitrary but fixed r ∈ N and an arbitrary ε > 0, let P * Ur ∈ P(U r ) and P * X r |Ur ∈ P(X r |U r ) be such that (50) holds. Now, let P̃ Ur+1 ∈ P(U r+1 ) be such that P̃ Ur+1 := P * Ur ⊗ π, where π ∈ P(X ) is defined as π(x) := |X | −1 , and let P̃ Xr+1|Ur+1 ∈ P(X r+1 |U r+1 ) be such that P̃ Xr+1|Ur+1 := P * X r |Ur ⊗ σ, where σ ∈ P(X |X ) is defined as σ(x|x̃) = 1 if and only if x = x̃. Then, from (49), it holds that where (a) follows from the definition of P̃ Ur+1 and P̃ Xr+1|Ur+1 ; (b) follows from the mutual information chain rule and the fact that U * r and Z sr+1 are independent, as are X π and Z r s r ; while (c) follows because I(X π ; Z sr+1 ) ≤ log |X |. We now investigate the effect of the prefix channel P̃ Xr|Ur on the symmetrizability of the resultant channel (W ⊗r • P̃ Xr|Ur ) as follows: where the previous relation follows from the properties of the operators • and ⊗ along with the definition of the conditional distribution σ. Since the AVC W is not L-symmetrizable, it follows for every r ≥ 2 that the AVC (W ⊗r • P̃ Xr|Ur ) is not L-symmetrizable either, even if (W ⊗r−1 • P * Xr−1|Ur−1 ) is L-symmetrizable. 2) Block Coding: At this point, we describe the construction of a list code C s list according to Definition 2.11 for the AVWC (W ⊗r • P̃ Xr|Ur , V ⊗r • P̃ Xr|Ur ). 
We start by generating our codebook as follows: For a code block length t ∈ N and r ∈ N\{1}, let P Ur ∈ P t 0 (U r ) and use it to generate the set of codewords u t r (m c , m r ) ∈ T PU r ⊂ U t r , for m c ∈ M c and m r ∈ M r . Using the union bound, we can show that the produced codewords satisfy the constraints in Lemma 3.2 and Lemma 3.5 with probability approaching one as t → ∞.
Next, we present our encoding and decoding algorithms. Given a confidential message m c ∈ M c , the encoder chooses a randomization message m r ∈ M r uniformly at random then outputs the codeword u t r (m c , m r ). At the legitimate receiver, we use a list decoder ϕ L similar to the one given by Definition 3.3, that takes the received sequence y n and outputs a list J ∈ J L (M c × M r ).
3) Reliability and secrecy analysis: Lemma 3.4 implies that for an AVC which is not L-symmetrizable, there exists a list code C s list with list size L and block length t, constructed as described in the previous step, which can be used to transmit the messages (m c , m r ) reliably over the AVC (W ⊗r • P̃ Xr|Ur ) as long as the rate is bounded by min q∈P(S r ) where δ > 0. The previous result is only valid if the AVC (W ⊗r • P̃ Xr|Ur ) is not L-symmetrizable. Moreover, by reliably transmitted we mean that the average decoding error of C s list decays exponentially fast; thus, for ζ > 0 we have On the other hand, based on the secrecy analysis in [23] that relates the strong secrecy constraint in (28) to the variational distance in Lemma 3.5, we can show that the constructed list code is asymptotically secure in the strong sense as long as max s r ∈S r I(U r ; Z r s r ) + δ ≤ lim inf By asymptotically secure in the strong sense, we mean that the information leakage of the confidential message to the eavesdropper decays exponentially fast; thus, for τ > 0, we have Now, let P Ur ∈ P t 0 (U r ) converge to P̃ Ur ∈ P(U r ) as t → ∞, such that P̃ Ur = P * Ur−1 ⊗ π, where π is as defined before and P * Ur−1 ∈ P(U r−1 ) is an optimal choice for the optimization problem in (49); then from (51), (53) and (55) we have lim inf 4) Code Transformation: Up to this point, we have shown that we can construct a list code C s list that establishes reliable and secure communication over the AVWC (W ⊗r • P̃ X r |Ur , V ⊗r • P̃ X r |Ur ). Now, we want to use this code to establish reliable and secure communication over the AVWC (W, V).
Let t̃ ∈ {0, . . . , r − 1} be such that for every n ∈ N we can write n = t · r + t̃. Since we assumed without loss of generality that U r = X r , we can transform any codeword u t r (m c , m r ) into a codeword x t·r (m c , m r ). Next, we generate the codewords x n (m c , m r ) for m c ∈ M c and m r ∈ M r by concatenating a dummy codeword x t̃ to the transformed codeword x t·r (m c , m r ). Using this technique, we are able to transform the list code C s list , constructed over the alphabet U r with block length t, into a new list code C̃ s list , constructed over the alphabet X with block length n. One can easily show that under this transformation the newly constructed code C̃ s list has the same reliability and secrecy performance as the original code C s list . This implies that we have constructed a list code C̃ s list with list size L that can be used to establish reliable and secure communication over the AVWC (W, V), where the rate of this code is given by: Finally, letting r → ∞ completes our achievability proof. Now for the converse, we start by pointing out that the average error probability ē L (C s list ) of a list code is affine in the channel. This implies that ē L (C s list ) does not change if one passes to the generalized channel state space P(S n ). Thus, for q ∈ P(S n ), we have ē L (C s list ) = max s n ∈S n (1/|M c |) Σ mc Σ x n Σ y n : ϕ L (y n ) ∌ mc W n s n (y n |x n )E(x n |m c ) = max q∈P(S n ) (1/|M c |) Σ mc Σ x n Σ y n : ϕ L (y n ) ∌ mc W n q (y n |x n )E(x n |m c ), where W n q (y n |x n ) = Σ s n q(s n )W n s n (y n |x n ) and (a) follows due to Eq. (30). If we use Eq. (59) along with Fano's inequality, then for every q ∈ P(S n ), the following holds: (60) H(M|Y n q ) ≤ 1 + λ log |M c |. Now, we are ready to derive our converse. We start by rewriting Eq. 
(29) as follows: where (a) follows by adding and subtracting the two sides of the inequality in (60); (b) follows similarly by applying the inequality in (31), where ε = (1/n)(τ + 1 + λ log |M| − log L) + δ; while (c) follows by defining the auxiliary channel Q : M → U n . Now, if we take the limit as n → ∞, which implies that ε → 0, the bound in (61) reduces to the expression of C * (W, V) in (48), and this completes our converse.
3.3. Second coding theorem. In this section, we derive another coding theorem to characterize the list secrecy capacity of AVWCs using a class of public-confidential list codes as in Definition 2.13, but we restrict our analysis to the case where the public rate vanishes.
Theorem 3.7. The public-confidential list secrecy capacity C ps list (W, V, L) of the AVWC (W, V) with vanishing public rate, i.e. R p = 0, is characterized by the following: C ps list (W, V, L) = C * (W, V) if L ≥ L(W) + 1, and C ps list (W, V, L) = 0 otherwise, where W is the AVC between the transmitter and the legitimate receiver.

Corollary 1.
For an AVWC (W, V), the list secrecy capacity C s list (W, V, L) is equivalent to the public-confidential list secrecy capacity C ps list (W, V, L) when the public rate vanishes, i.e. if R p = 0, then C s list (W, V, L) = C ps list (W, V, L). Proof. We focus only on the achievability part of the second condition in Theorem 3.7, where the AVC W is not L-symmetrizable. Our proof is based on a coding scheme that combines the reliability list decoding techniques introduced in [4] for AVCs with the results established in [26] for correlated random codes.
1-Code Structure: We construct a list code C ps list of Definition 2.13 with list size L and block length n = n̄ + ñ by concatenating a deterministic list code C list of Definition 2.9 with list size L and block length n̄ with a correlated random code C s cr of Definition 2.6 with block length ñ. Additionally, we let the set of public messages M p encoded by the list code C list be identical to the set of values of the common randomness G used by the correlated random code C s cr . 2-Encoding: Given a confidential message m c ∈ M c , the encoder chooses an index γ ∈ G uniformly at random and encodes it using the list code C list , producing a codeword x n̄ (γ). Then m c and γ are given to the encoder of the correlated random code C s cr , which outputs a codeword x ñ γ (m c ). Finally, the encoder concatenates the two codewords, producing x n (γ, m c ) = (x n̄ (γ), x ñ γ (m c )), and transmits it.
3-Decoding: Having received y n = (y n̄ , y ñ ), the decoder works as follows: First, it uses y n̄ and the decoder of the list code C list to produce a list of size L, i.e. Ĝ = ϕ L (y n̄ ). Then, for every γ̂ ∈ Ĝ, we use y ñ along with the decoder ϕ γ̂ of the correlated random code C s cr to find a corresponding confidential message m̂ c ∈ M c . Thus, the final decoding output is a list of size L as follows: {(γ̂, m̂ c ) ∈ G × M c : γ̂ ∈ ϕ L (y n̄ ) and m̂ c = ϕ γ̂ (y ñ )}.
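The two-stage decoding just described can be sketched generically as follows. This is a minimal sketch only: the decoder callables are placeholders for ϕ_L and ϕ_γ, not actual constructions of those decoders.

```python
def concatenated_list_decode(y, n_bar, prefix_list_decoder, cr_decoders):
    """Two-stage list decoding for the concatenated code C^ps_list.

    y is split as (y_bar, y_tilde); prefix_list_decoder plays the role of
    the list decoder phi_L of C_list, and cr_decoders[gamma] plays the role
    of the decoder phi_gamma of the correlated random code C^s_cr."""
    y_bar, y_tilde = y[:n_bar], y[n_bar:]
    return [(g, cr_decoders[g](y_tilde)) for g in prefix_list_decoder(y_bar)]

# Toy instantiation: two candidate randomness indices, trivial decoders.
cr_decoders = {0: lambda yt: yt[0], 1: lambda yt: 1 - yt[0]}
out = concatenated_list_decode((9, 9, 9, 1), 3, lambda yb: [0, 1], cr_decoders)
assert out == [(0, 1), (1, 0)]  # one (gamma, m_c) pair per list candidate
```

Note that the final list size is inherited from the prefix stage: each candidate γ̂ contributes exactly one confidential-message estimate, so the output list has at most L entries.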

4-Reliability Analysis:
Based on our coding scheme, we can bound the average probability of error of C ps list given by (32) in terms of the average error probability of C list given by (23) and the average error probability of C s cr given by (16) as follows: (62) ē L (C ps list ) ≤ ē L (C list ) + ē(C s cr ). Based on the results established in [2,26], we can construct a correlated random code with a secrecy rate R c close to C s cr (W, V), such that ē(C s cr ) decays exponentially fast.
It was also shown that the amount of common randomness required to construct such codes grows quadratically in the block length, as highlighted by [2, Lemma 6], i.e. |G| = O(ñ 2 ). This implies that the public rate R p of the list code C list is given by (63). Since the AVC W is not L-symmetrizable, [4, Lemma 3] implies that ē L (C list ) decays exponentially fast as long as R p is small enough. According to (63), R p vanishes as n → ∞. This implies that ē L (C ps list ) decays exponentially fast as long as R c ≤ C s cr (W, V). 5-Secrecy Analysis: Let Γ be a uniformly distributed random variable over the set G. Then, based on the structure of our coding scheme, the mean secrecy constraint in (33) can be bounded as follows: where (a) follows from the mutual information chain rule; (b) follows because M c and Z n̄ s n̄ are independent; (c) follows because Γ is identical to M p and the fact that M c is only encoded using C s cr ; (d) follows from the secrecy analysis of the correlated random codes in [2,23], where ε > 0. On the other hand, the maximum secrecy constraint in (34) can be bounded as follows: where (a) follows as in (64); (b) follows because Γ is identical to M p ; (c) follows from the secrecy analysis of the correlated random codes in [26].
The previous coding scheme shows that for an AVWC (W, V), where the AVC W is not L-symmetrizable, there exists a list code C ps list with list size L, a confidential rate R c = C s cr (W, V) and a public rate R p = 0, such that C ps list satisfies the reliability constraint in (37), the mean secrecy criterion in (38) and the maximum secrecy criterion in (39). Since C s cr (W, V) = C * (W, V), this implies that C ps list (W, V) = C ps,mean list (W, V) = C ps,max list (W, V) = C * (W, V). Since we only consider the case where R p vanishes as n → ∞, we have C ps list (W, V) = C s list (W, V), and this completes our achievability proof.
3.4. Discussion. Theorem 3.6 indicates that for an AVWC (W, V), regardless of the order of symmetrizability L(W) of the AVC W, we can always achieve a positive secrecy rate equivalent to the correlated random secrecy capacity by using a list code with list size L ≥ L(W) + 1. This implies that list codes are more powerful than uncorrelated codes, because uncorrelated codes can only provide positive secrecy if the AVC W between the transmitter and the legitimate receiver is not symmetrizable. Moreover, the construction of list codes does not require predefined coordination between the legitimate nodes, which makes them more efficient than correlated random codes.
Theorem 3.6 also implies that list codes provide a solution to the instability of the secrecy capacity of the AVWC (W, V). This is because if we use a sufficiently large list size L, such that C s list (W, V, L) > 0, then for all AVWCs (W̃, Ṽ) in the neighborhood of (W, V), the secrecy capacity C s list (W̃, Ṽ, L) is also greater than zero.
The previous two points imply that we can always overcome the different jamming strategies used by the jammer to disturb reliable communication over an AVWC (W, V) by using list codes with list size L ≥ L(W) + 1. Although this feature advocates the usage of list codes, it was shown in [4, Theorem 3] that even for simple AVWCs, where |X | = |Y| = |S| = 2, there exist jamming strategies that lead to an arbitrarily large order of symmetrizability L(W). This implies that, for these AVWCs, we need to construct list codes with an arbitrarily large list size, which is not always feasible.
We need to highlight some of the differences between the two coding schemes used to establish Theorem 3.6. One of the major differences between these two schemes lies in the assumptions on the information shared between the eavesdropper and the jammer. In the first coding scheme, we constructed a pure list code of Definition 2.11 using the techniques introduced in Lemma 3.2 and Definition 3.3, as shown in Section 3.2. Such a coding scheme does not impose any restrictions on the information shared between the jammer and the eavesdropper. On the other hand, the second coding scheme is based on a list code of Definition 2.13, which is constructed by concatenating a non-secure list code of Definition 2.9 and a correlated random code of Definition 2.6, as shown in Section 3.3. Such a coding scheme inherits the characteristics of correlated random codes, which implies that the eavesdropper is not allowed to inform the jammer about any information it can extract.
Another important difference between the two coding schemes used to establish Theorem 3.6 is related to the types of services they can provide: public and confidential. Some public services, like synchronization and channel estimation, are very important in real-life scenarios, yet they do not need to be protected against eavesdropping. The first coding scheme uses a list code C s list , which considers only a confidential message set M c . This implies that in this coding scheme both public and confidential services are protected against eavesdropping, even though this is not needed for the public ones. On the other hand, the second coding scheme uses a list code C ps list , which differentiates between public and confidential messages.
It is worth mentioning that Theorem 3.6 can be very helpful in investigating the analytical properties of C s list (W, V, L), which allows us to understand how the capacity function behaves. For example, the continuity behavior studies whether a small change in the AVWC (W, V) leads only to a small change in the capacity or to a significant one. This question is not only related to the stability of the capacity function; it also plays an important role in determining the computability of the capacity function on Turing machines [8,9]. This implies that it is important to investigate the continuity behavior of C s list (W, V, L), and in particular its discontinuity points. To do so, we need to extend the continuity analysis of the uncorrelated secrecy capacity C s uc (W, V) in [7] to the setting of list codes.
Another important analytical property is additivity, which investigates how the capacities of two orthogonal channels are related to the capacity of their orthogonal combination. This property allows us to decide between using a joint encoder-decoder pair for the combined channel or simply using an individual one for each channel. In [23] and [25], it was shown that the uncorrelated secrecy capacity C s uc (W, V) exhibits an extreme form of super-additivity known as super-activation: although the uncorrelated secrecy capacity of each of two orthogonal AVWCs is zero, joint coding over their orthogonal combination can lead to a strictly positive secrecy rate. It would be of high interest to investigate such behavior for the list secrecy capacity C s list (W, V, L).

4. Finite list size
4.1. Motivation and main results. Theorem 3.6 shows that, in order to achieve reliable and secure communication over an AVWC (W, V) using a list code C s list such that the average error probability ē(C s list ) given by (27) and the information leakage of the confidential message L(C s list ) given by (28) vanish, we need a list code whose list size L depends on the order of symmetrizability of the AVC W between the transmitter and the legitimate receiver, i.e. L ≥ L(W) + 1. This implies that the capability of list codes to overcome the different jamming strategies used by the jammer depends on the order of symmetrizability of the AVC W. The problem is that, as shown in [4, Theorem 3], even for simple AVCs the order of symmetrizability L(W) might be arbitrarily large. This implies that, for the corresponding AVWCs, we need list codes with an arbitrarily large list size, which is not always feasible.
The previous issue motivates the following question: does relaxing the reliability and secrecy constraints affect the required list size? In order to answer this question, we consider the following scenario: given an AVWC (W, V), where we allow for a small non-vanishing probability of error λ and a small non-vanishing information leakage τ , what is the maximum rate that we can achieve using list codes, and what is the smallest list size required to achieve such a rate? Definition 4.1. For arbitrary but fixed λ, τ > 0, a non-negative number R is a (λ, τ )-achievable list secrecy rate with list size L for the AVWC (W, V) if for all δ > 0 there is an n(δ) ∈ N, such that for all n ≥ n(δ) there exists a sequence of list codes (C s list ) n satisfying the following: The supremum of all (λ, τ )-achievable list secrecy rates R with list size L is denoted by C s list (W, V, λ, τ, L). Remark 6. It is important to highlight the difference between this definition and Definition 2.12. In Definition 2.12, we need to find list codes that satisfy the reliability and secrecy constraints in (30) and (31) for all values of λ, τ > 0, while in the previous definition we need to satisfy the reliability and secrecy constraints only for fixed values of λ and τ . Theorem 4.2. For an AVWC (W, V) with correlated random secrecy capacity C s cr (W, V), let λ, τ > 0 be arbitrary but fixed. Then for every secrecy rate R < C s cr (W, V), there exists a finite list size L such that R < C s list (W, V, λ, τ, L). Sketch of the proof. The proof consists of two main steps. In the first step, we show that there exists a capacity-achieving correlated random code C̃ s cr that utilizes only a finite amount of shared common randomness |G̃|, such that its average error probability ē(C̃ s cr ) given by (16) is less than or equal to λ, while the maximum information leakage of the confidential message L max (C̃ s cr ) given by (18) is less than or equal to τ . 
In the second step, we show how the constructed correlated random code C̃ s cr can be transformed into a list code. The detailed proof is given in the next two sections.

Corollary 2.
For an AVWC (W, V), every secrecy rate R < C s cr (W, V) is a (λ, τ )-achievable secrecy rate using a list code with list size L(λ) given by the smallest natural number satisfying (74), for ε > 0 and λ ∈ (0, 1), where S is the channel state set, while lim n→∞ τ = 0.
Theorem 4.2 and Corollary 2 provide answers to the two questions raised at the beginning of this section. Theorem 4.2 implies that we can construct list codes that achieve secrecy rates up to the correlated random secrecy capacity if we allow for a small non-vanishing probability of error and information leakage. On the other hand, Corollary 2 specifies a sufficient condition on the list size L required for the construction of such codes, which interestingly depends only on the reliability parameter λ and is independent of the secrecy parameter τ .

4.2. Correlated random codes with finite resources. In this section, we derive an intermediate result that will help us prove Theorem 4.2. We construct a correlated random code C̃ s cr that utilizes only a finite amount of shared common randomness |G̃|. The constructed code can provide secure communication over the AVWC (W, V) at a rate equivalent to the correlated random secrecy capacity C s cr (W, V), such that the information leakage of the confidential message to the eavesdropper given by (18) vanishes. However, the constructed code only ensures that the average error probability ē(C̃ s cr ) given by (16) does not exceed an arbitrary but fixed value λ ∈ (0, 1).

Lemma 4.3.
For an AVWC (W, V), let λ ∈ (0, 1) be arbitrary but fixed. There exists a correlated random code C̃ s cr that can achieve secrecy rates equivalent to the correlated random secrecy capacity C s cr (W, V) using a finite set G̃ of values of shared common randomness, such that the constraints in (69) and (70) are satisfied. Remark 7. The previous lemma extends the result established in [10, Theorem 3] for correlated random codes with finite resources, where the work in [10] used the mean information leakage of the confidential message with respect to the weak secrecy criterion to evaluate the secrecy performance of correlated random codes.
Proof. For an AVWC (W, V), we know from [26, Theorem 6] that there exists a correlated random code C s cr that can achieve the correlated random secrecy capacity C s cr (W, V) using a shared common randomness G whose cardinality grows quadratically in the block length n, i.e. |G| = O(n 2 ). We also know that both the average error probability ē(C s cr ) given by (16) and the maximum information leakage L max (C s cr ) given by (18) decay exponentially fast, as highlighted by the reliability and secrecy analysis in [26].
Consider a correlated random code C̃ s cr constructed by selecting |G̃| uncorrelated codes from the correlated random code C s cr used to prove the achievability of [26, Theorem 6], where G̃ ⊂ G is a finite set whose cardinality is independent of n. Now, let α > 0 be arbitrary but fixed; then for a fixed channel state sequence s n ∈ S n , we have (71), where (a) follows from the Markov inequality. If we let ε > 0 and use the Taylor expansion of exp(t), we can bound the expectation term in (71) as follows: (72), where (a) follows because ē(s n |C s uc (γ)) ≤ 1, while (b) follows because, for a fixed channel state sequence s n , the expectation of ē(s n |C s uc (γ)) is a negative exponential in n [23]. Now, if we take all the state sequences s n ∈ S n into consideration, we reach (73): ≤ exp(−nελ|G̃| + |G̃| ln 2 + n ln |S|), where (a) follows due to (71) and (72), while (b) follows by choosing α = nε. Now, if we make sure that (74) holds, then by (73) the average probability of error of C̃ s cr is smaller than λ with probability approaching one exponentially fast. We now turn to the secrecy performance of C̃ s cr . Due to the way C̃ s cr was constructed, we have max s n ∈S n max γ∈G̃ I(M; Z n γ,s n |C̃ s cr ) ≤ max s n ∈S n max γ∈G I(M; Z n γ,s n |C s cr ), which decays exponentially fast in n with some exponent ε̃ > 0. The last step follows from the achievability proof of [23, Theorem 6]. This implies that for an AVWC (W, V), we can construct a correlated random code that utilizes only a finite set G̃ of values of shared common randomness, where the cardinality of G̃ is given by (74), such that the constraints in (69) and (70) are satisfied with high probability. This completes our proof.
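A toy Monte Carlo experiment illustrates the Markov-inequality step of this proof. The numbers are synthetic: the exponential-mean error model below merely stands in for the exponentially decaying per-subcode errors used in the argument, and is not derived from the paper.

```python
import random

def selection_trial(K, mean_err, lam, rng):
    """Draw K synthetic subcode error probabilities with a small mean and
    test whether their average stays below the target lambda."""
    errs = [min(1.0, rng.expovariate(1.0 / mean_err)) for _ in range(K)]
    return sum(errs) / K <= lam

rng = random.Random(1)
lam, mean_err, K = 0.05, 1e-4, 50   # synthetic numbers, purely illustrative
trials = 10_000
hits = sum(selection_trial(K, mean_err, lam, rng) for _ in range(trials))
# Markov: P(average > lam) <= mean_err / lam = 0.002,
# so nearly every random selection of K subcodes is good.
assert hits / trials > 0.99
```

This mirrors the structure of the proof: because each subcode's expected error is exponentially small, a randomly selected finite family G̃ has average error below λ except with vanishing probability, which is exactly what the bound in (73) quantifies.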

4.3. Proof of Theorem 4.2.
In this section, we derive a coding scheme to prove Theorem 4.2. Our coding scheme is based on transforming the correlated random code with finite resources C̃ s cr constructed in the previous section into a list code. This transformation is similar to the one used in Section 3.3.
Proof. Our proof consists of five main steps. 1-Code Structure: We construct a list code C s list of Definition 2.13 with list size L and block length n = n̄ + ñ by concatenating a deterministic list code C list of Definition 2.9 with list size L and block length n̄ with the correlated random code C̃ s cr of block length ñ, constructed as described in the previous section, which utilizes only a finite set of shared common randomness G̃. In our construction, we let the set of public messages M p encoded by the list code C list be identical to the set of values of the common randomness G̃ used by the correlated random code C̃ s cr , i.e. M p = G̃. Additionally, we set the list size L as follows: L = |G̃| = |M p |. 2-Encoding: Given a confidential message m c ∈ M c , the encoder chooses an index γ ∈ G̃ = M p uniformly at random and encodes it using the list code C list , producing a codeword x n̄ (γ). Then m c and γ are given to the encoder of the correlated random code C̃ s cr , which outputs a codeword x ñ γ (m c ). Finally, the encoder concatenates the two codewords, producing x n (γ, m c ) = (x n̄ (γ), x ñ γ (m c )), and transmits it.

3-Decoding:
Having received y^n = (y^n̄, y^ñ), the decoding proceeds sequentially as follows: First, y^n̄ is given to the decoder of the list code C_list to produce a list of size L, i.e. φ_L(y^n̄) = Ĵ ∈ J_L(G̃). Then, for every γ̂ ∈ Ĵ, we use y^ñ along with the decoder φ_γ̂ of the correlated random code C̃^s_cr to find a corresponding confidential message m̂_c ∈ M_c. Thus, our decoder can be expressed as follows: φ_L(y^n) = J̃ ∈ J_L(G̃ × M_c) = {(γ̂, m̂_c) ∈ G̃ × M_c : γ̂ ∈ φ_L(y^n̄) and m̂_c = φ_γ̂(y^ñ)}.
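A minimal self-contained sketch of this sequential decoder, with assumed stand-ins for the component decoders (a real list decoder may return up to L candidates; here it returns a singleton for simplicity):

```python
# Toy parameters (assumptions for illustration only)
L = 4        # list size = |G_tilde|
N_BAR = 4    # prefix block length n_bar

def list_decode_prefix(y_bar):
    # stand-in for phi_L of C_list: reads the index back out of the
    # binary prefix and returns it as a (singleton) candidate list
    gamma_hat = sum(b << i for i, b in enumerate(y_bar))
    return [gamma_hat % L]

def correlated_decode(gamma, y_tilde):
    # stand-in for phi_gamma of C_cr: parity of the suffix as "message"
    return sum(y_tilde) % 2

def sequential_decode(y):
    # split y^n into (y^n_bar, y^n_tilde), list-decode the prefix, then
    # run the gamma-keyed decoder on the suffix for every candidate,
    # collecting (gamma_hat, m_hat_c) pairs as in the definition of phi_L
    y_bar, y_tilde = y[:N_BAR], y[N_BAR:]
    return [(g, correlated_decode(g, y_tilde)) for g in list_decode_prefix(y_bar)]

print(sequential_decode([1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1]))
```

The output list plays the role of J̃ ∈ J_L(G̃ × M_c): each surviving prefix candidate γ̂ contributes exactly one pair (γ̂, φ_γ̂(y^ñ)).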

4-Reliability Analysis:
Based on our coding scheme, we can bound the average probability of error of C^s_list given by (32) in terms of the average error probability of C_list given by (23) and the average error probability of C̃^s_cr given by (16) as follows:

ē_L(C^s_list) ≤ ē_L(C_list) + ē(C̃^s_cr).   (76)

C_list is a deterministic list code with list size L used to transmit the public message γ ∈ M_p, where L = |M_p|. Since the list size equals the number of public messages, the decoded list always contains the transmitted γ, so C_list always makes a correct decoding decision. Thus, the first term in (76) vanishes, i.e. ē_L(C_list) = 0. On the other hand, according to the proof of Lemma 4.3, if we choose the amount of shared randomness G̃ utilized by the correlated random code C̃^s_cr such that |G̃| satisfies the constraint in (74), then the second term in (76) is less than λ with a probability that goes to one exponentially fast. Thus, we have

P[ē_L(C^s_list) ≤ λ] → 1 as n → ∞.   (77)

5-Secrecy Analysis:
Let Γ̃ be a uniformly distributed random variable over the set G̃. Then, based on the structure of our coding scheme, the secrecy constraint in (33) can be bounded as follows:

I(M_c; Z^n_{s^n} | C^s_list) =(a) I(M_c; Z^n̄_{s̄^n} | C^s_list) + I(M_c; Z^ñ_{s̃^n} | Z^n̄_{s̄^n}, C^s_list)
 =(b) I(M_c; Z^ñ_{s̃^n} | Z^n̄_{s̄^n}, C^s_list)
 ≤(c) max_{γ ∈ G̃} I(M_c; Z^ñ_{γ,s̃^n} | C̃^s_cr)
 ≤(d) ε̃

where (a) follows from the mutual information chain rule; (b) follows because M_c and Z^n̄_{s̄^n} are independent; (c) follows because Γ̃ is identical to M_p and the fact that M_c is only encoded using C̃^s_cr; (d) follows from the secrecy analysis of the correlated random code C̃^s_cr as in (75), where ε̃ > 0.

Now, in order to finalize our proof, we need to show that the overall transmission rate of the constructed list code C^s_list is equivalent to the transmission rate of the correlated random code C̃^s_cr. Since n̄ = O(log L), where L is given by (74), it follows that ñ/n → 1 as n → ∞, which means that using the prefix list code C_list on top of C̃^s_cr does not affect the overall transmission rate.

Conclusion and future work
In this paper, we studied secure communication over AVWCs. We showed that list codes can overcome the drawbacks of both correlated and uncorrelated codes, in that they can be used to establish reliable and secure communication over symmetrizable AVWCs in the absence of coordination resources. We provided a full characterization of the list secrecy capacity of the AVWC (W, V), showing that if the list size L is greater than the order of symmetrizability of W, denoted by L(W), then the list secrecy capacity is equivalent to the correlated random secrecy capacity; otherwise, it is zero. Our analysis emphasizes that there exist list codes that achieve this capacity without enforcing any assumptions on the communication between the active jammer and the passive eavesdropper. This result implies that for a sufficiently large list size L, list codes can always provide a secrecy rate up to the correlated random secrecy capacity. Thus, list codes can resolve the instability of the secrecy capacity of AVWCs. Finally, we showed that if we relax the reliability constraint, we can construct list codes whose rate is close to the correlated random secrecy capacity using a finite list size L that only depends on the requested average error probability and is independent of the order of symmetrizability.
The characterization of the list secrecy capacity established in this paper is a necessary step in understanding the behavior of the secrecy capacity function of AVWCs under list decoding. In future work, it would be of high interest to analyze the continuity behavior of the capacity function, which would allow us to investigate its Turing computability. Further, it would also be interesting to study whether the list secrecy capacity exhibits super-additivity and super-activation behavior. The answers to these questions require a better understanding of the L-symmetrizability condition and the development of more explicit examples.