A unified polynomial selection method for the (tower) number field sieve algorithm

At Eurocrypt 2015, Barbulescu et al. introduced two new methods of polynomial selection, namely the Conjugation and the Generalised Joux-Lercier methods, for the number field sieve (NFS) algorithm as applied to the discrete logarithm problem over finite fields. A sequence of subsequent works has developed and applied these methods to the multiple and the (extended) tower number field sieve algorithms. This line of work has led to new asymptotic complexities for various cases of the discrete logarithm problem over finite fields. The current work presents a unified polynomial selection method which we call Algorithm $\mathcal{D}$. Starting from the Barbulescu et al. paper, all the subsequent polynomial selection methods can be seen as special cases of Algorithm $\mathcal{D}$. Moreover, for the extended tower number field sieve (exTNFS) and the multiple extended TNFS (MexTNFS), there are finite fields for which using the polynomials selected by Algorithm $\mathcal{D}$ provides the best asymptotic complexity. Suppose $Q = p^n$ for a prime $p$ and further suppose that $n = \eta\kappa$ such that there is a $c_{\theta} > 0$ for which $p^{\eta} = L_Q(2/3, c_{\theta})$. For $c_{\theta} > 3.39$, the complexity of exTNFS-$\mathcal{D}$ is lower than the complexities of all previous algorithms; for $c_{\theta} \notin (0, 1.12) \cup [1.45, 3.15]$, the complexity of MexTNFS-$\mathcal{D}$ is lower than that of all previous methods.


Introduction
One of the important problems in cryptography is to compute discrete logarithms over the multiplicative group of a finite field. Let $p$ be a prime, $n \ge 1$ an integer and $Q = p^n$. Suppose that $g$ is a generator of the group of non-zero elements of the finite field $\mathbb{F}_Q$. The discrete logarithm problem in $\mathbb{F}_Q^*$ (loosely speaking, one talks about the discrete logarithm problem in $\mathbb{F}_Q$) is the following: given $g$ and a non-zero element $h$ of $\mathbb{F}_Q$, compute $a$ such that $g^a = h$. Here $a$ is called the discrete logarithm of $h$ to the base $g$.
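To make the problem statement concrete, the following sketch solves a toy instance of the discrete logarithm problem with the generic baby-step giant-step algorithm. The parameters ($p = 101$, $g = 2$) are illustrative values, not from the paper; generic methods such as this run in time exponential in the bit-size of the field, which is why the subexponential sieve algorithms discussed below matter.

```python
# Baby-step giant-step: a generic O(sqrt(group order)) discrete logarithm
# method, shown here only to make the problem statement concrete.  The
# parameters (p = 101, generator g = 2) are toy values for illustration.
from math import isqrt

def dlog_bsgs(g, h, p):
    """Return a with g^a = h (mod p), searching the full group order p - 1."""
    m = isqrt(p - 1) + 1
    # Baby steps: table of g^j for 0 <= j < m.
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: h * (g^-m)^i for 0 <= i < m.
    gm_inv = pow(g, -m, p)          # modular inverse; needs Python >= 3.8
    gamma = h % p
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * gm_inv) % p
    raise ValueError("no discrete logarithm found")

a = dlog_bsgs(2, pow(2, 47, 101), 101)
assert pow(2, a, 101) == pow(2, 47, 101)
```

Since 2 generates $\mathbb{F}_{101}^*$ (a group of order 100), the returned exponent is unique modulo 100.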
There are two known general approaches to this problem which lead to subexponential run-times: the function field sieve (FFS) [1,2,18,20] and the number field sieve (NFS). Practical issues in relation collection were considered in Gaudry et al. [10]. Using the Conjugation method for selecting polynomials, Guillevic et al. [14] reported a discrete logarithm computation on a 170-bit MNT curve.
The improved complexities of the various versions of the tower number field sieve algorithm impact the choice of key sizes in pairing-based cryptography. Menezes, Sarkar and Singh [27] provide a concrete analysis of the various methods with the goal of determining key sizes suitable for implementing bilinear pairings at the 128-bit and the 192-bit security levels. Updated estimates of secure key sizes for pairings have been proposed by Barbulescu and Duquesne [3].
Our contributions. In the (tower) NFS algorithm, there are two number fields which are represented by two polynomials $f(x)$ and $g(x)$ satisfying certain restrictions. The generalisation to multiple number fields requires a set of polynomials to represent the different number fields. The overall complexity of the algorithm is determined by the choice of the polynomials, in particular their degrees and their infinity norms (i.e., the maximum of the absolute values of the coefficients). The lower the degrees and the infinity norms, the lower the overall time complexity. It is, however, difficult to ensure that both the degrees and the infinity norms are simultaneously low. A number of algorithms have been proposed in the literature for polynomial selection, providing various trade-offs between the degrees and the infinity norms.
In this paper, we present a new polynomial selection algorithm which we call Algorithm D. We show that the following previously proposed methods can be seen as special cases of Algorithm D.
• The Conjugation and the Generalised Joux-Lercier (GJL) algorithms by Barbulescu et al. [4].
• Algorithm A by Sarkar and Singh [31]. (Algorithm A subsumes the Conjugation and the GJL methods.)
• The generalised Conjugation method of Jeong and Kim [25]. (The method of Jeong and Kim subsumes the polynomial selection algorithm by Kim and Barbulescu [24].)
Additionally, we show that Algorithm D can be instantiated to obtain a variant of the GJL algorithm which works in the setting of extended TNFS. Algorithm D provides polynomials to represent two number fields. Using previously proposed ideas [7,28], we show how to extend this to obtain multiple polynomials so that multiple number fields can be used.
In terms of complexity, since the best previous polynomial selection algorithms are special cases of Algorithm D, the complexities achieved using polynomials obtained from Algorithm D are never greater than what has been previously achieved. Below we highlight the cases where improved complexities for certain medium prime cases are obtained using polynomials selected by Algorithm D.
Let $p = L_Q(a, c_p)$ with $1/3 < a \le 2/3$ and let $n = \eta\kappa$ be such that there is a $c_\theta > 0$ with $p^\eta = L_Q(2/3, c_\theta)$. The expression for the complexity obtained in [24,25] is a function of $c_\theta$. The minimum value of this complexity (as a function of $c_\theta$) is the best known complexity for the medium prime case and is achieved only for a particular value of $c_\theta$.
1. Using two number fields, the best known complexity for the medium prime case is $L_Q(1/3, (48/9)^{1/3})$ [24,25]. This complexity is achieved only for $c_\theta = 12^{1/3}$. Algorithm D does not lower this complexity. Rather, we are able to show that for $c_\theta > 3.39$, the complexity achieved using polynomials obtained from Algorithm D is lower than the complexity of all previous algorithms using two number fields.
2. In the case of multiple number fields, the best known complexity for the medium prime case is $L_Q(1/3, 1.71)$ [24,25] and is achieved only for $c_\theta = 2.123$. Again, Algorithm D does not lower this complexity; instead we are able to show that for $c_\theta \notin (0, 1.12) \cup [1.45, 3.15]$, the complexity achieved using polynomials obtained from Algorithm D is lower than that of all previous methods.

Basics of the tower number field sieve algorithm
Index calculus algorithms for computing discrete logarithms in a group have a general structure. A small subset of elements of the group is identified and called the factor basis. The first phase of the algorithm consists of finding linear relations between the discrete logarithms of the elements of the factor basis. This provides a system of linear equations among the discrete logarithms of the elements of the factor basis. Usually this system of linear equations turns out to be quite sparse. The second phase consists of using linear algebra techniques to solve the system of linear equations. This phase provides the discrete logarithms of the elements of the factor basis. In the third phase, the target element is decomposed over the factor basis. Using the already computed discrete logarithms of the factor basis elements, this decomposition allows computing the discrete logarithm of the target element. The relation collection and the individual logarithm phases depend on the underlying group and the particular index calculus algorithm. On the other hand, the linear algebra phase is the same for all index calculus algorithms.
Typically, the linear algebra phase is performed modulo one (or, a few) large prime factor of the order of the group. This yields the discrete logarithm of the target element modulo this prime. The discrete logarithm of the target element modulo the smaller factors of the order of the group is computed using the Pohlig-Hellman and Pollard rho algorithms. The Chinese Remainder Theorem is used to combine the discrete logarithms modulo the different factors to obtain the discrete logarithm modulo the order of the group.
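As a toy illustration of this reduction (not part of the paper's algorithm), the following sketch solves the discrete logarithm modulo each prime-power factor of the group order by brute force and recombines the answers with the Chinese Remainder Theorem; a real implementation would use Pollard rho for the per-factor computations rather than exhaustive search.

```python
# Toy Pohlig-Hellman + CRT: solve the discrete logarithm modulo each
# prime-power factor of the group order, then combine the residues.
# Parameters are small illustrative values only.

def crt(residues, moduli):
    """Combine x = r_i (mod m_i) for pairwise coprime m_i."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        # Solve x + m*k = r (mod mi) for k.
        k = ((r - x) * pow(m, -1, mi)) % mi
        x += m * k
        m *= mi
    return x % m

def dlog_pohlig_hellman(g, h, p, factorisation):
    """g generates a subgroup of F_p^* of order N = prod(q**e);
    return log_g(h) mod N."""
    N = 1
    for q, e in factorisation:
        N *= q ** e
    residues, moduli = [], []
    for q, e in factorisation:
        qe = q ** e
        # Project into the subgroup of order q**e and brute-force there.
        g_i = pow(g, N // qe, p)
        h_i = pow(h, N // qe, p)
        a_i = next(a for a in range(qe) if pow(g_i, a, p) == h_i)
        residues.append(a_i)
        moduli.append(qe)
    return crt(residues, moduli)

# 2 generates F_101^*, whose order is 100 = 2^2 * 5^2.
a = dlog_pohlig_hellman(2, pow(2, 77, 101), 101, [(2, 2), (5, 2)])
assert a == 77
```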
The TNFS algorithm is an index calculus algorithm for computing discrete logarithms over a finite field. The algorithm, though, does not directly work over a finite field. Instead, it starts with two (or more) appropriate number fields and uses suitable homomorphisms to map to the desired finite field. Consequently, the identification of the factor basis and the associated relation collection and individual logarithm phases are most conveniently expressed in terms of the background number fields.
Below we provide an overview of the TNFS algorithm. More detailed descriptions can be found in [24,6]. The TNFS algorithm applies to fields $\mathbb{F}_Q$ where $Q = p^n$, $p$ is a prime and $n = \eta\kappa$ is a factorisation of $n$. Depending on the values of $\eta$ and $\kappa$, several variants are obtained.
• If $\eta = 1$ (and $\kappa = n$) then this is the classical NFS algorithm.
• If $\eta = n$ (and $\kappa = 1$) then the variant proposed by Barbulescu et al. in [6] is obtained; this case was called TNFS.
• If $1 < \eta, \kappa < n$ then the variant proposed in [24] is obtained; this case was called exTNFS.
Let $h(z)$ be a monic polynomial with integer coefficients, of degree $\eta$, which is irreducible over both $\mathbb{Z}$ and $\mathbb{F}_p$. Let $R := \mathbb{Z}[z]/(h(z))$. Note that $\mathbb{F}_{p^\eta} = \mathbb{F}_p[z]/(h(z))$. Let $f(x)$ and $g(x)$ be polynomials in $R[x]$ satisfying the following properties.
1. Both $f(x)$ and $g(x)$ are irreducible over $R$.
2. Over $\mathbb{F}_{p^\eta}$, $f(x)$ and $g(x)$ have a common factor $\varphi(x)$ of degree $\kappa$. The field $\mathbb{F}_{p^n}$ is realised as $\mathbb{F}_{p^\eta}[x]/(\varphi(x))$.
The description of the TNFS algorithm involves a tower of number fields as shown in Figure 1, where $K_h := \mathbb{Q}[z]/(h(z))$ and $K_f$ and $K_g$ are the field extensions of $K_h$ defined by the polynomials $f(x)$ and $g(x)$ respectively. The polynomials $h(z) \in \mathbb{Z}[z]$ and $f(x), g(x) \in R[x]$ provide two different homomorphisms to move from $R[x]$ to the field $\mathbb{F}_{p^n}$. This is shown in the commutative diagram of Figure 2. Let $\alpha_f$ (resp. $\alpha_g$) be a root of $f(x)$ (resp. $g(x)$) in $K_f$ (resp. $K_g$). Let $\mathcal{O}_f$ (resp. $\mathcal{O}_g$) be the ring of integers of the number field $K_f$ (resp. $K_g$). The factor basis consists of two parts, one corresponding to $K_f$ and the other corresponding to $K_g$. Let $\phi(x) \in R[x]$ be of degree less than $t$. The case $t = 2$ was explicitly considered in [6,24] and the appendix of [24] briefly considers the case $t > 2$. A single relation is obtained from the factorisations of the principal ideals $\phi(\alpha_f)\mathcal{O}_f$ and $\phi(\alpha_g)\mathcal{O}_g$ in $\mathcal{O}_f$ and $\mathcal{O}_g$ respectively when both the norms are $B$-smooth for a suitably chosen smoothness bound $B$. So, the factor basis consists of the prime ideals of $K_f$ and $K_g$ which can occur in the factorisations of $\phi(\alpha_f)\mathcal{O}_f$ and $\phi(\alpha_g)\mathcal{O}_g$ respectively when both of these norms are $B$-smooth.
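As a concrete toy illustration of the base layer of this tower, the following minimal sketch implements arithmetic in $\mathbb{F}_{p^\eta} = \mathbb{F}_p[z]/(h(z))$ for the illustrative choice $p = 3$, $h(z) = z^2 + 1$ (so $\eta = 2$); the list-based representation and the parameters are assumptions made for the example, not from the paper.

```python
# Minimal arithmetic in F_{p^eta} = F_p[z]/(h(z)), mirroring the base layer
# of the tower.  Polynomials are coefficient lists, lowest degree first.
# h(z) = z^2 + 1 over F_3 is a toy irreducible choice (eta = 2).

p = 3
h = [1, 0, 1]            # z^2 + 1

def poly_mod(a, m, p):
    """Reduce the coefficient list a modulo the monic polynomial m over F_p."""
    a = a[:]
    while len(a) >= len(m):
        c = a[-1]
        shift = len(a) - len(m)
        for i, mi in enumerate(m):
            a[shift + i] = (a[shift + i] - c * mi) % p
        while a and a[-1] == 0:
            a.pop()
    return a

def mul(a, b):
    """Multiply two elements of F_p[z]/(h(z))."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    return poly_mod(prod, h, p)

# z * z = z^2 = -1 = 2 in F_9, confirming h(z) = 0 holds in the quotient.
assert mul([0, 1], [0, 1]) == [2]

def power(a, e):
    """Square-and-multiply exponentiation in the quotient ring."""
    result, base = [1], a
    while e:
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result

# F_9^* has order 8, so every nonzero element x satisfies x^8 = 1.
assert power([1, 1], 8) == [1]   # (1 + z)^8 = 1 in F_9
```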
The explicit form of the factor basis for the case $t = 2$ has been provided in [6,24] and we mention it below. The factor basis $\mathcal{F}$ can be written as the disjoint union $\mathcal{F} = \mathcal{F}_f \cup \mathcal{F}_g$. The definition of $\mathcal{F}_f$ is given below; the definition of $\mathcal{F}_g$ is obtained by replacing $f$ with $g$.
$\mathcal{F}_f$ consists of the prime ideals of $\mathcal{O}_f$, coprime to $l(f)\,\mathrm{Disc}(f)$, lying above a prime $\mathfrak{q}$ of $K_h$ which in turn lies above a rational prime $q < B$; here $l(f)$ denotes the leading coefficient of $f(x)$ and $\mathrm{Disc}(f)$ denotes the discriminant of $f(x)$. An explicit expression for the cardinality of the factor basis has been given in [6]; in the asymptotic analysis, this cardinality is taken to be $B^{1+o(1)}$.
For $t > 2$, a suitable factor basis can be defined and, for a fixed value of $t$, the asymptotic expression for $\#\mathcal{F}$ remains $B^{1+o(1)}$.
We next consider the manner in which a relation is generated.
Suppose that the ideals $\phi(\alpha_f)\mathcal{O}_f$ and $\phi(\alpha_g)\mathcal{O}_g$ factor over $\mathcal{F}_f$ and $\mathcal{F}_g$ respectively, giving rise to two multiplicative relations. These two relations can then be converted into a linear relation. The method for doing this is described in Section 4.3 of [21] and has later been summarised as Theorem 4 in [6]. As shown in Figure 2, starting from $\phi(x)$ there are two different homomorphisms which lead to the same element of the field $\mathbb{F}_{p^n}$. Combining these homomorphisms with the factorisations of the ideals $\phi(\alpha_f)\mathcal{O}_f$ and $\phi(\alpha_g)\mathcal{O}_g$ provides two different factorisations of an element of $\mathbb{F}_{p^n}$. The details of obtaining such a relation involve the notion of virtual logarithms [34,21] and Schirokauer maps [32]. Following the exposition of the technique in [21,6,24], a linear relation is obtained in which the following quantities appear: $\ell$ is a large prime factor of $p^n - 1$ modulo which the discrete logarithm is to be computed; $\lambda_{f,j}$ and $\lambda_{g,j}$ arise from Schirokauer maps defined modulo the $\ell$-th power of the units; $r_1$ and $r_2$ are the unit ranks of $\mathcal{O}_f$ and $\mathcal{O}_g$ respectively; $\log_f$ (resp. $\log_g$) is a map from ideals of $\mathcal{O}_f$ (resp. $\mathcal{O}_g$) to $\mathbb{Z}/\ell\mathbb{Z}$; and $\chi_{f,j}\colon \{1, \dots, r_1\} \to \mathbb{Z}/\ell\mathbb{Z}$ and $\chi_{g,j}\colon \{1, \dots, r_2\} \to \mathbb{Z}/\ell\mathbb{Z}$ are called virtual logarithms.
The main task of the relation collection phase is to obtain polynomials $\phi(x)$ which give rise to relations of the type given by (2). For this, it is sufficient to ensure that the norms $N(f, \phi) := N_{K_f/\mathbb{Q}}(\phi(\alpha_f))$ and $N(g, \phi) := N_{K_g/\mathbb{Q}}(\phi(\alpha_g))$ are both $B$-smooth. As mentioned in [24], the norms can be expressed in terms of resultants as follows:
$$N(f, \phi) = \mathrm{Res}_z\bigl(\mathrm{Res}_x(f(x), \phi(x)),\, h(z)\bigr), \qquad N(g, \phi) = \mathrm{Res}_z\bigl(\mathrm{Res}_x(g(x), \phi(x)),\, h(z)\bigr),$$
where $\mathrm{Res}$ denotes the resultant.
The polynomials $\phi(x) \in R[x]$ are chosen with the restriction $\|\phi\|_\infty = E^{2/(\eta t)}$ for an appropriate choice of $E$. This ensures that the number of polynomials $\phi(x)$ considered in the relation collection phase is $E^2$. The time spent per polynomial depends on whether a sieving technique is used or whether each norm is tested for smoothness using the Elliptic Curve factoring Method (ECM). Asymptotically, the second cost would be greater and, from the discussion in Section 3 of [24], the time spent per polynomial turns out to be $B^{o(1)}$. So, the total cost of the relation collection phase is $E^2 B^{o(1)}$. Choosing $E$ to be equal to $B$, this cost comes out to be $B^{2+o(1)}$. For more details on sieving, we refer to [10,15,36].
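The filter at the heart of relation collection can be sketched as follows. The norm values below are stand-in integers and the trial-division test is a naive placeholder for sieving or ECM; the sketch is only meant to make the both-sides-smooth condition concrete.

```python
# Sketch of the relation-collection filter: a candidate phi contributes a
# relation only when both norms are B-smooth.  The norm values here are
# stand-in integers; in the actual algorithm they come from the resultant
# formulas, and smoothness is detected by sieving or ECM, not trial division.

def is_b_smooth(n, B):
    """Trial division: True iff every prime factor of n is < B."""
    n = abs(n)
    f = 2
    while f * f <= n:
        while n % f == 0:
            n //= f
        f += 1
    return n == 1 or n < B

def gives_relation(norm_f, norm_g, B):
    """A (norm_f, norm_g) pair yields a relation only if both are smooth."""
    return is_b_smooth(norm_f, B) and is_b_smooth(norm_g, B)

assert is_b_smooth(2 * 3 * 5 * 7, 10)        # 210 is 10-smooth
assert not is_b_smooth(2 * 101, 10)          # the factor 101 is too large
assert gives_relation(2**5 * 3, 5, 7)
assert not gives_relation(2**5 * 3, 101, 7)  # right side fails
```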
A little more than $\#\mathcal{F} + r_1 + r_2 = B^{1+o(1)}$ relations of type (3) are collected. The resulting system of linear equations is solved using the block Wiedemann algorithm [35] to obtain the logarithms of the factor basis elements modulo the prime $\ell$. The cost of the linear algebra stage is $B^{2+o(1)}$. Due to the choice $E = B$, this cost is equal to the cost of relation collection. The discrete logarithm of the target element is computed in the individual discrete logarithm phase of the algorithm. The idea is the following. The target element is lifted to one of the number fields and is then written in terms of the elements of the factor basis using the recursive procedures outlined in [21,24,13]. The discrete logarithm of the target element can then be expressed in terms of the logarithms of the factor basis elements. Substituting the values already obtained from the linear algebra step yields the discrete logarithm of the desired element. Asymptotically, the cost of the individual discrete logarithm phase is dominated by the costs of the linear algebra and relation collection phases, and so this phase is not considered in the asymptotic complexity analysis of the algorithm.
The sizes of the norms $N(f, \phi)$ and $N(g, \phi)$ can be upper bounded using known upper bounds on resultants [8]. Let $\mathbf{f}(z, x)$ be a bivariate polynomial with integer coefficients, where $\mathbf{f}_{i,j}$ is the coefficient of $z^i x^j$. Using the bounds on resultants from [8], an upper bound (5) on $|\mathrm{Res}_z(\mathrm{Res}_x(\mathbf{f}(z, x), \phi(z, x)), h(z))|$ is obtained [27]. Applying (5) with $\mathbf{f} = f$ and $\mathbf{f} = g$ in succession, and noting that $N(f, \phi)$ and $N(g, \phi)$ are given by (4), we obtain the upper bounds (7) and (8). For a given $B$, ensuring the $B$-smoothness of the norms (given by (4)) depends on the values of the norms. For the complexity analysis, instead of the actual values of the norms, the upper bounds given by (7) and (8) are used. These bounds depend on the properties of the polynomials $h(z)$, $f(x)$ and $g(x)$; in particular, $N(f, \phi)$ and $N(g, \phi)$ are determined by the infinity norms and the degrees of $h(z)$, $f(x)$ and $g(x)$. The degree of $h(z)$ is $\eta$ and it is usually possible to choose its infinity norm $H$ to be small. So, the main task of reducing the computational complexity boils down to choosing $f(x)$ and $g(x)$ having low infinity norms and low degrees. Simultaneously achieving both of these goals is difficult. As mentioned earlier, various algorithms have been proposed in the literature for choosing $f(x)$ and $g(x)$, offering different trade-offs between degrees and infinity norms.

A new polynomial selection method for exTNFS
The work [4] provides two methods for selecting polynomials for the classical NFS algorithm. These are called the generalised Joux-Lercier (GJL) method and the Conjugation method. The GJL method is based on an earlier method due to Joux and Lercier [19] and uses the LLL algorithm to select polynomials.
The GJL matrix: given a monic polynomial $\varphi(x) = \varphi_0 + \varphi_1 x + \cdots + \varphi_{k-1} x^{k-1} + x^k$ with integer coefficients and $r \ge k$, define an $(r+1) \times (r+1)$ matrix $M_{\varphi,r}$ whose first $k$ rows are $p$ times the first $k$ unit vectors and whose remaining $r+1-k$ rows are the coefficient vectors of $x^i \varphi(x)$ for $i = 0, \dots, r-k$. Apply the LLL algorithm to $M_{\varphi,r}$ and let the first row of the resulting LLL-reduced matrix be $[\psi_0, \dots, \psi_r]$. This vector is taken to represent a polynomial $\psi(x) = \psi_0 + \psi_1 x + \cdots + \psi_r x^r$, and we write $\mathrm{LLL}(M_{\varphi,r})$ to denote the polynomial $\psi(x)$. The determinant of $M_{\varphi,r}$ is $p^k$ and so, by the LLL-reduced property [26], $\|\psi\|_\infty = O(p^{k/(r+1)})$. If $Q = p^n$, then $\|\psi\|_\infty = O(Q^{k/(n(r+1))})$.
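A minimal sketch of this matrix construction, with illustrative values $p = 13$, $\varphi(x) = x^2 + 5x + 7$, $r = 3$ chosen for the example; the LLL reduction itself is left to an external library such as fpylll and is not performed here.

```python
# Construction of the GJL lattice matrix M_{phi,r}: the first k rows are
# p times unit vectors, the remaining rows hold the coefficients of
# x^i * phi(x).  Because phi is monic, the matrix is lower triangular with
# diagonal (p, ..., p, 1, ..., 1), so det(M) = p^k -- which is what makes
# the LLL bound O(p^{k/(r+1)}) on the short vector work.

def gjl_matrix(phi, p, r):
    """phi: coefficient list [phi_0, ..., phi_{k-1}, 1] of a monic poly."""
    k = len(phi) - 1
    assert r >= k
    M = []
    for i in range(k):                      # rows p*e_0, ..., p*e_{k-1}
        M.append([p if j == i else 0 for j in range(r + 1)])
    for i in range(r + 1 - k):              # rows for x^i * phi(x)
        M.append([0] * i + list(phi) + [0] * (r - k - i))
    return M

p, r = 13, 3
phi = [7, 5, 1]                             # x^2 + 5x + 7, so k = 2
M = gjl_matrix(phi, p, r)

# Lower triangular, so the determinant is the product of the diagonal.
det = 1
for i in range(r + 1):
    det *= M[i][i]
assert det == p ** 2                        # p^k = 13^2 = 169
```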
Algorithm D describes the polynomial selection method for the extended TNFS. Apart from $p$, the other inputs are the factors $\eta$ and $\kappa$ of $n$, a divisor $d$ of $\kappa$ and an integer $r \ge \kappa/d$. The inputs $d$ and $r$ provide the flexibility whereby Algorithm D can be specialised to either the GJL or the Conjugation method. We provide more details later.
The condition $\gcd(\eta, \kappa/d) = 1$ is required by Algorithm D. The reason is the following. The polynomial $A_1(x)$ has integer coefficients and we wish to factorise $A_1(x)$ over $\mathbb{F}_p$ to obtain a factor $A_2(x)$ of degree $k$. This $A_2(x)$ is later used to define the polynomial $\varphi(x)$, which is required to be irreducible over $\mathbb{F}_{p^\eta}$. A necessary condition is that $A_2(x)$ must itself be irreducible over $\mathbb{F}_{p^\eta}$. Since $A_2(x)$ is a polynomial of degree $k$ with coefficients from $\mathbb{F}_p$ which is required to be irreducible over $\mathbb{F}_{p^\eta}$, it is necessary that $\gcd(\eta, k) = 1$. The condition $\gcd(\eta, \kappa/d) = 1$, however, does not restrict applicability. One can always choose $d = \kappa$ to obtain $k = 1$, and then $\gcd(\eta, \kappa/d) = 1$. Other values of $d$ may also be appropriate; e.g., if $\eta = 3$ and $\kappa = 4$, then one can choose $d = 2$.
Algorithm D (polynomial selection for TNFS) proceeds as follows. Choose $h(z)$ to be a monic polynomial of degree $\eta$ with small integer coefficients such that $h(z)$ is irreducible over $\mathbb{F}_p$, and let $k = \kappa/d$. Repeatedly construct candidate polynomials $f(x)$, $\varphi(x)$ and $g(x) = \mathrm{Res}_y(\psi(y), C_0(x) + y\,C_1(x))$ having the stated properties, until $f(x)$ and $g(x)$ are irreducible over $\mathbb{Q}[z]/(h(z))$ (and hence over $R$) and $\varphi(x)$ is irreducible over $\mathbb{F}_{p^\eta} = \mathbb{F}_p[z]/(h(z))$.
Next we show how Algorithm D can be specialised to obtain the GJL and the Conjugation methods and their extensions.
1. If $\eta = 1$, $d = 1$, then we obtain the GJL method of [4], where different trade-offs are obtained by varying $r$; if $\eta = 1$, $d = \kappa = n$, then we obtain the Conjugation method of [4].
2. If $\eta = 1$, then we obtain Algorithm A from [31].
3. If $d = \kappa$ and $r = 1$, then we obtain the method proposed by Jeong and Kim [25], which is essentially the Conjugation method in the exTNFS setting. Allowing $r > 1$ (or $d < \kappa$) provides a generalisation and leads to lower asymptotic complexity for certain ranges of finite fields.
4. If $\gcd(\eta, \kappa) = 1$, then choosing $d = 1$ and $r \ge \kappa$ provides the exTNFS-GJL algorithm. It is mentioned in [25] that combining their techniques with the GJL method gives rise to the exTNFS-GJL algorithm.
The following result states the basic properties of Algorithm D. Bounds on the norms are obtained from the bounds on resultants given in [8] (see Section 2).
Proposition 1. The outputs $f(x)$, $g(x)$ and $\varphi(x)$ of Algorithm D satisfy the following:
1. $\deg(f) = d(r+1)$, $\deg(g) = rd$ and $\deg(\varphi) = \kappa$;
2. over $\mathbb{F}_{p^\eta}$, both $f(x)$ and $g(x)$ have $\varphi(x)$ as a factor;
3. $\|f\|_\infty = O(\ln(p))$ and $\|g\|_\infty = O(Q^{k/(n(r+1))})$.
Consequently, for a sieving polynomial $\phi$, the bounds (11) and (12) on the norms $N(f, \phi)$ and $N(g, \phi)$ follow, and asymptotically, following Lemma 1 of [24], the bound (13) on their product is obtained.
Note. Instead of LLL reduction, it is possible to use HKZ or BKZ reduction in Algorithm D. But the choice of the lattice reduction algorithm does not matter in the asymptotic complexity analysis of TNFS, as the root Hermite factor is ignored asymptotically [9]. Practically, the lattice dimensions that arise in the application of Algorithm D are quite low. In such low dimensions, all three reduction methods produce shortest vectors of similar norm. Our experiments confirm the same.

Asymptotic analysis
We consider the asymptotic analysis in two parts. The first part is an analysis of the asymptotic expression for the norm bound as given by (13), and the second part extends this to an asymptotic analysis of the run time of the algorithm.
4.1. Asymptotic analysis of the norm bounds. The expression for the norm bound given by (13) hides the constant factors appearing in (11) and (12) in the $L_Q(2/3, o(1))$ factor. For an asymptotic analysis, we take $o(1)$ to be zero so that the factor $L_Q(2/3, o(1))$ can be taken to be 1. This gives the product of the norm bounds as $E^{2d(2r+1)/t} \times Q^{(t-1)/(d(r+1))}$. Similar asymptotic expressions for the product of norm bounds of polynomial selection algorithms appear in the literature [4,31,30].
In Table 1, we compare the expressions for the norm bounds for various polynomial selection algorithms. The last column in the row for exTNFS-D shows how NFS-GJL, NFS-Conj, NFS-A, exTNFS-GJL and exTNFS-gConj are obtained as special cases of exTNFS-D by suitably instantiating the parameters of the algorithm. The norm bounds obtainable from exTNFS-C are either equalled or improved by norm bounds obtainable from exTNFS-D.
The norm bounds obtained from exTNFS-JLSV1 cannot be derived as a special case of the norm bounds obtained from exTNFS-D, and by implication cannot be derived as a special case of the norm bounds obtained from either exTNFS-GJL or exTNFS-gConj. For all composite values of $n$ from 4 to 24 and $t = 2$, we considered all the expressions for norm bounds that can be obtained from exTNFS-JLSV1 but not from exTNFS-D. The work [4] lists values of $Q$-$E$ pairs obtained from the CADO-NFS software. For these $Q$-$E$ pairs, none of the expressions obtained from exTNFS-JLSV1 achieves the minimum value of the norm; i.e., for each expression of a norm bound obtained from exTNFS-JLSV1, there is another expression for a norm bound obtained from exTNFS-D which evaluates to a lower value for all the $Q$-$E$ pairs given in [4]. For the $Q$-$E$ pairs given in [4], we have also computed the values of the products of norms for exTNFS-gConj and exTNFS-D; the plots are shown in Figure 3.

Table 1. Parameterised efficiency estimates for NFS obtained from the different polynomial selection methods. (Columns: Method, Norms Product, Conditions.)

4.2. Asymptotic run times for the medium characteristic case. Following [24], the key observation for analysing exTNFS is that, by suitably choosing $\eta$, the complexity analysis of the medium prime case can be transformed to the complexity analysis of the boundary case. Our analysis below follows [24,30].
For the complexity analysis, as is customary, the o(1) terms are ignored, i.e., they are taken to be 0. Also, as in all previous works, the complexity analysis is heuristic.
Two such important heuristic assumptions are the following.
1. The $B$-smoothness behaviour of the norm of an ideal is assumed to be the same as that of a random integer of similar size.
2. The events of obtaining smoothness in the two number fields are assumed to be independent, so that the individual smoothness probabilities can be multiplied together to obtain the joint smoothness probability.
As before, let $Q = p^n$ and let $n = \eta\kappa$ be a non-trivial factorisation of $n$. Suppose that for some $a$ with $1/3 < a \le 2/3$, $p = L_Q(a, c_p)$. We write $\eta$ in the following manner: $\eta = c_\eta (\ln Q/\ln\ln Q)^{2/3 - a}$. Further, let $P = p^\eta$. Then it is easy to verify that $P = L_Q(2/3, c_\theta)$ with $c_\theta = c_p c_\eta$. Even though $P$ is not a prime, the magnitude of $P$ corresponds to the boundary characteristic case analysis of NFS. The substitutions $p \leftarrow P$, $a \leftarrow 2/3$, $c_p \leftarrow c_\theta$ and $n \leftarrow \kappa$ transform the medium characteristic setting into the boundary setting. So, the complexity analysis of the medium characteristic case reduces to the complexity analysis of the boundary characteristic case.
We recall the following.
1. The number of polynomials to be considered for sieving is $E^2$.
2. The factor basis is of size $B$.
Let $B = L_Q(1/3, c_b)$. As mentioned earlier, set $E = B$ so that, asymptotically, the number of sieving polynomials is equal to the time for the linear algebra step.
Let $\pi = \Psi(\Gamma, B)$ be the probability that a random positive integer which is at most $\Gamma$ is $B$-smooth. Let $\Gamma = L_Q(z, \zeta)$ and $B = L_Q(b, c_b)$. Using the $L$-notation version of the Canfield-Erdős-Pomerance theorem,
$$\pi^{-1} = L_Q\!\left(z - b,\; (z - b)\,\frac{\zeta}{c_b}\right).$$
The bound on the product of the norms given by Proposition 1 is $E^{2d(2r+1)/t} \times Q^{(t-1)/(d(r+1))}$, where $t - 1$ is the degree of the sieving polynomial. Following the usual convention, we assume that the same smoothness probability $\pi$ holds for the event that a random sieving polynomial $\phi(x)$ is smooth over the factor base.
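The Canfield-Erdős-Pomerance behaviour underlying this estimate can be made concrete numerically: a random integer near $\Gamma$ is $B$-smooth with probability roughly $u^{-u}$ where $u = \ln\Gamma/\ln B$. The following sketch compares this heuristic with an exact count by trial division; the specific sizes are toy values chosen for the example.

```python
# The rough rho(u) ~ u^{-u} smoothness heuristic behind the CEP theorem,
# compared against an exact count by trial division for toy sizes only.
from math import log

def is_b_smooth(n, B):
    """True iff every prime factor of n is < B (trial division)."""
    f = 2
    while f * f <= n:
        while n % f == 0:
            n //= f
        f += 1
    return n < B

def smooth_fraction(limit, B):
    """Exact fraction of integers in [2, limit) that are B-smooth."""
    hits = sum(1 for n in range(2, limit) if is_b_smooth(n, B))
    return hits / (limit - 2)

Gamma, B = 10**6, 100
u = log(Gamma) / log(B)              # here u = 3.0
estimate = u ** (-u)                 # heuristic probability u^{-u}
empirical = smooth_fraction(10**4, B)  # exact count, at a smaller range

# Both are probabilities; the heuristic is only an order-of-magnitude guide.
assert 0 < estimate < 1 and 0 < empirical < 1
```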
The expected number of polynomials to consider for obtaining one relation is $\pi^{-1}$. Since $B$ relations are required, obtaining this number of relations requires about $B\pi^{-1}$ trials. Balancing the cost of sieving and the linear algebra steps requires $B\pi^{-1} = B^2$ and so $\pi^{-1} = B$.
Lemma 4.1. Let $n = \eta\kappa$; let $d$ be a divisor of $\kappa$ such that $k = \kappa/d$ is coprime to $\eta$; let $r \ge k$; $t \ge 2$; $p = L_Q(a, c_p)$ with $1/3 < a \le 2/3$; and $\eta = c_\eta(\ln Q/\ln\ln Q)^{2/3 - a}$. Then the asymptotic expressions for the norm bounds follow; the computation for the second norm is similar. This leads to the following result for the medium prime case, which is the analogue of Theorem 1 in [31] for the boundary case.
Theorem 4.2. Let $n = \eta\kappa$; let $d$ be a divisor of $\kappa$ such that $k = \kappa/d$ is coprime to $\eta$; let $r \ge k$; $t \ge 2$; $p = L_Q(a, c_p)$ with $1/3 < a \le 2/3$; and $\eta = c_\eta(\ln Q/\ln\ln Q)^{2/3 - a}$. It is possible to ensure that the runtime of the exTNFS algorithm with polynomials chosen by Algorithm D is $L_Q(1/3, 2c_b)$, where $c_b$ is obtained as follows.
Proof. The product of the norms is bounded as in Lemma 4.1, and the corresponding expression for $\pi^{-1}$ follows from the Canfield-Erdős-Pomerance estimate. From the condition $\pi^{-1} = B$, a quadratic in $c_b$ is obtained; solving the quadratic and choosing the positive root gives $c_b$.
Theorem 4.2 can be analysed in exactly the same manner as Theorem 1 of [31], with $c_p$ in Theorem 1 of [31] replaced by $c_\theta$. Performing the analysis shows that the minimum complexity achieved by exTNFS-D is $L_Q(1/3, (48/9)^{1/3})$ and that this complexity is achieved for $c_\theta = 12^{1/3}$, $d = \kappa$, $r = 1$ and $t = 2$. The choice $r = 1$ converts exTNFS-D to exTNFS-gConj. So exTNFS-gConj attains the lowest asymptotic complexity, but only for a single value of $c_\theta$. The choices $r = 1$ and $t = 2$ are not necessarily the best possible for other values of $c_\theta$. This is highlighted in Figure 4, which compares exTNFS-gConj with exTNFS-D$(t, k, r)$. Note that exTNFS-gConj corresponds to exTNFS-D$(t, 1, 1)$ with $t \ge 2$. There are segments in Figure 4 where exTNFS-D$(t, k, r)$ with $k > 1$ and/or $r > 1$ has lower complexity than exTNFS-D$(t, 1, 1)$. In particular, we note that for $c_\theta > 3.39$, the complexity of exTNFS-D is lower than the complexities of all previous algorithms applicable to medium characteristic finite fields.
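A quick numeric sanity check of the constants quoted above; this is pure arithmetic on the closed forms stated in the text, with no further assumptions.

```python
# Numeric check of the complexity constants for the two-number-field case:
# the best constant is (48/9)^{1/3}, attained at c_theta = 12^{1/3}.

best_constant = (48 / 9) ** (1 / 3)      # approx 1.747
optimal_c_theta = 12 ** (1 / 3)          # approx 2.289

assert abs(best_constant - 1.747) < 0.001
assert abs(optimal_c_theta - 2.289) < 0.001
# The multiple-number-field variant discussed later improves 1.747 to ~1.71.
assert best_constant > 1.71
```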

Using multiple number fields
The multiple number field sieve algorithm uses several number fields to lower the asymptotic complexity. To be able to do this, it is required to generate several irreducible polynomials in R[x] all of which have a common irreducible factor over F p η . In this section, we describe how Algorithm D can be modified to generate such irreducible polynomials. This is an adaptation of a method described in [28].
From the manner in which $f(x)$, $g_1(x)$ and $g_2(x)$ have been constructed, it follows that over $\mathbb{F}_{p^\eta}$, $\varphi(x)$ is a factor of all three of these polynomials. Then $\varphi(x)$ is also a factor of $g_i(x)$ for $i = 3, \dots, V$. It is usually easy to ensure that the polynomials $g_i(x)$ are irreducible over $R$. In the usual way, there are homomorphisms from these number fields to the finite field $\mathbb{F}_{p^n} = \mathbb{F}_{p^\eta}[x]/(\varphi(x))$.
The factor basis $\mathcal{F}$ is the disjoint union of $\mathcal{F}_f$ (the left side of the factor basis) and $\mathcal{F}_1, \dots, \mathcal{F}_V$ (the right side of the factor basis), where $\mathcal{F}_f$ consists of ideals in $\mathcal{O}_f$ whose norms are bounded above by $B$ and, for $i = 1, \dots, V$, $\mathcal{F}_i$ consists of ideals in $\mathcal{O}_i$ whose norms are bounded above by $B'$. The values of $B$ and $B'$ determine the complexity of the algorithm and are chosen so as to balance the cost of the relation collection and the linear algebra phases. The size of the entire factor basis is $B + VB'$. During relation collection, a relation is obtained if a sieving polynomial $\phi(x)$ is smooth over $\mathcal{F}_f$ (left side smoothness) and also over at least one of the factor bases $\mathcal{F}_i$ (right side smoothness).
A balancing condition between the left and right sides of the factor basis is imposed; as a consequence, the size of the factor basis is $B^{1+o(1)}$ and, asymptotically, the linear algebra step takes time $B^2$.
The sieving polynomials $\phi(x)$ are chosen from $R[x]$ and are of degree at most $t - 1$. The coefficients of these polynomials are elements of $\mathbb{Z}[z]/(h(z))$ and so are themselves polynomials in $z$. Let $\phi(x) = \phi_0(z) + \phi_1(z)x + \cdots + \phi_{t-1}(z)x^{t-1}$, where the coefficients of each $\phi_i(z)$ are chosen to take $E^{2/(\eta t)}$ distinct values. So, the number of sieving polynomials is $E^2$. To balance the cost of relation collection and linear algebra, one sets $E = B$.
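The count of sieving polynomials can be verified by direct enumeration: $\phi(x)$ has $t$ coefficients in $\mathbb{Z}[z]/(h(z))$, each a polynomial in $z$ of degree less than $\eta$, giving $\eta t$ integer coefficients in total. The sketch below uses toy values in which $E^{2/(\eta t)}$ happens to be an integer; that integrality is an assumption made for the example.

```python
# Counting check for the sieving space: eta*t integer coefficients, each
# drawn from a set of size E^{2/(eta*t)}, give exactly E^2 polynomials.
from itertools import product

eta, t = 2, 2
per_coeff = 3                       # stands in for E^{2/(eta*t)}
E = per_coeff ** (eta * t // 2)     # chosen so per_coeff = E^{2/(eta*t)}

num_sieving_polys = sum(1 for _ in product(range(per_coeff), repeat=eta * t))
assert num_sieving_polys == per_coeff ** (eta * t) == E ** 2
```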
As before, let $\pi$ be the probability that a random sieving polynomial $\phi(x)$ gives rise to a relation. Let $\pi_1$ be the probability that $\phi(x)$ is smooth over the factor basis for the first number field and $\pi_2$ be the probability that $\phi(x)$ is smooth over at least one of the other $V$ factor bases. Further, let $\Gamma_1 = \mathrm{Res}_x(f(x), \phi(x))$ be the bound on the norm corresponding to the first number field and $\Gamma_2 = \mathrm{Res}_x(g_i(x), \phi(x))$ be the bound on the norm for any of the other number fields; recall that $\Gamma_2$ is determined only by the degree and the $L_\infty$-norm of $g_i(x)$. Heuristically, we have $\pi = \pi_1 \pi_2$. One relation is obtained in about $\pi^{-1}$ trials, and so the total number of relations obtained after sieving would be $E^2\pi$; this should be equal to $B$ for the linear algebra step to go through. Hence we have, as before, $B = E = \pi^{-1}$.
Appropriate choices of $B'$ and $V$ are then made. For the case of multiple NFS, we obtain the following result for the medium prime case, which is the analogue of Theorem 4 in [31] for the boundary case.
Hence the overall complexity is $L_Q(1/3, 2c_b)$. The plot in Figure 5 shows that, asymptotically, the variant using multiple number fields outperforms the variant using two number fields. From Theorem 5.1, the entire analysis of [31] applies with $p$ replaced by $P$ and $c_p$ replaced by $c_\theta$. This shows that the best complexity achieved by MexTNFS-D is $L_Q(1/3, 1.71)$ and that this complexity is achieved for $c_\theta = 2.123$, $d = \kappa$, $r = 1$ and $t = 2$. Again, the choice $r = 1$ converts MexTNFS-D to MexTNFS-gConj. As in the case of exTNFS, the choices $r = 1$ and $t = 2$ are not necessarily the best possible for other values of $c_\theta$. In particular, we note that for $c_\theta \in (0, 1.12) \cup [1.45, 3.15]$, the complexity of MexTNFS-D is the same as that of MexTNFS-gConj, and for $c_\theta \notin (0, 1.12) \cup [1.45, 3.15]$, the complexity of MexTNFS-D is lower than that of all previous methods.

Conclusion
In this paper, we have presented a new polynomial selection method for the exTNFS algorithm. This method provides a generalisation of the Conjugation method in the setting of exTNFS proposed by Jeong and Kim [25]. For certain ranges of finite fields, the new method provides lower asymptotic complexity than the method of Jeong and Kim. Further, a concrete analysis of the different variants of TNFS algorithms shows that some of the new trade-offs obtained from Algorithm D provide better performance than the previous methods for particular ranges of values of $\lg Q$.