Computing Elliptic Curve Discrete Logarithms with Improved Baby-step Giant-step Algorithm

The negation map can be used to speed up the computation of elliptic curve discrete logarithms using either the baby-step giant-step algorithm (BSGS) or Pollard rho. Montgomery's simultaneous modular inversion can also be used to speed up Pollard rho when running many walks in parallel. We generalize these ideas and exploit the fact that for any two elliptic curve points X and Y, we can efficiently get X-Y when we compute X+Y. We apply these ideas to speed up the baby-step giant-step algorithm. Compared to the previous methods, the new methods can achieve a significant speedup for computing elliptic curve discrete logarithms in small groups or small intervals. Another contribution of our paper is to give an analysis of the average-case running time of Bernstein and Lange's "grumpy giants and a baby" algorithm, and also to consider this algorithm in the case of groups with efficient inversion. Our conclusion is that, in the fully-optimised context, both the interleaved BSGS and grumpy-giants algorithms have superior average-case running time compared with Pollard rho. Furthermore, for the discrete logarithm problem in an interval, the interleaved BSGS algorithm is considerably faster than the Pollard kangaroo or Gaudry-Schost methods.


Introduction
The discrete logarithm problem (DLP) in finite groups is an important computational problem in modern cryptography. Its presumed hardness provides the basis for the security of a number of cryptographic systems. The DLP was first proposed in the multiplicative groups of finite fields. Koblitz [13] and Miller [14] were the first to suggest that elliptic curves over finite fields would have some advantages. Broadly, there are two ways to speed up algorithms for the DLP: to reduce the number of group operations performed, or else to reduce the cost of each individual group operation. The main observation in this paper allows us to take the second approach. Precisely, we exploit the fact that, given any two points X and Y, we can compute X + Y and X − Y (in affine coordinates) in less than two times the cost of an elliptic curve addition, by recycling the field inversion. Hence, we propose improved algorithms to compute a list of consecutive elliptic curve points. We also discuss improved variants of the "grumpy giants" algorithm of Bernstein and Lange. We give a new approach to determine the average-case behaviour of the grumpy-giants algorithm, and also describe and analyse a variant of this algorithm for groups with efficient inversion. All our results are summarised in Table 3.
The paper is organized as follows. We recall the baby-step giant-step algorithm and its minor modifications in Section 2. Section 2.1 gives our new heuristic method for analysing interleaved BSGS algorithms. In Section 3, we describe negation map variants of the baby-step giant-step algorithm and the grumpy-giants algorithm. Section 4 discusses methods for efficiently computing lists of consecutive points on an elliptic curve. Section 5 brings the ideas together to present faster variants of the BSGS algorithm. Details of our experiments are given in the Appendices.

The baby-step giant-step algorithm (BSGS)
In this section, we recall the baby-step giant-step algorithm for DLP computations and present some standard variants of it. The baby-step giant-step algorithm makes use of a time-space tradeoff to solve the discrete logarithm problem in arbitrary groups. We use the notation of the ECDLP as defined in the introduction: given P and Q, find n such that Q = nP and 0 ≤ n < N.
The "textbook" baby-step giant-step algorithm is as follows. Let M = ⌈√N⌉. Then n = n0 + M n1, where 0 ≤ n0 < M and 0 ≤ n1 < M. Precompute P' = MP. Now compute the baby steps n0 P and store the pairs (n0 P, n0) in an easily searched structure (searchable on the first component) such as a sorted list or binary tree. Then compute the giant steps Q − n1 P' and check whether each value lies in the list of baby steps. When a match n0 P = Q − n1 P' is found, the ECDLP is solved. The baby-step giant-step algorithm is deterministic. The algorithm requires 2√N group operations in the worst case, and (3/2)√N group operations on average over uniformly random choices for Q.
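For concreteness, the textbook method can be sketched in a few lines of Python. The multiplicative group modulo a prime stands in for the elliptic curve group here; the prime, generator and order below are illustrative assumptions.

```python
# A minimal sketch of the "textbook" BSGS in a multiplicative group mod p.
from math import isqrt

def bsgs(g, h, p, N):
    """Find n with 0 <= n < N and g**n == h (mod p), or None."""
    M = isqrt(N - 1) + 1                           # M = ceil(sqrt(N))
    baby = {pow(g, n0, p): n0 for n0 in range(M)}  # baby steps: g^n0 -> n0
    gM_inv = pow(g, -M, p)                         # giant-step multiplier g^(-M)
    gamma = h
    for n1 in range(M):                            # giant steps h * g^(-M*n1)
        if gamma in baby:
            return n1 * M + baby[gamma]
        gamma = gamma * gM_inv % p
    return None

# Toy example: 2 generates a group of order 100 mod the prime 101.
print(bsgs(2, pow(2, 57, 101), 101, 100))  # -> 57
```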
To improve the average-case performance one can instead choose M = ⌈√(N/2)⌉. Then n = n0 + M n1, where 0 ≤ n0 < M and 0 ≤ n1 < 2M. We precompute P' = MP, compute the baby steps n0 P for 0 ≤ n0 < M, and store them appropriately. Then we compute the giant steps Q − n1 P'. On average, the algorithm finds a match after half the giant steps have been performed. Hence, this variant solves the DLP in √(2N) group operations on average. The worst-case complexity is (3/√2)√N ≈ 2.121√N group operations. A variant of the baby-step giant-step algorithm due to Pollard [18] is to compute the baby steps and giant steps in parallel, storing the points in two sorted lists/binary trees. One can show that the average-case running time of this variant is (4/3)√N group operations (Pollard justifies this with an argument that uses the fact that the expected value of max{x, y}, over uniformly distributed x, y ∈ [0, 1], is 2/3; at the end of Section 2.1 we give a new derivation of this result). We call this the "interleaving" variant of BSGS. The downside is that the storage requirement increases slightly (in both the average and worst case). We remark that the average-case running time of Pollard's method does not seem to be very sensitive in practice to changes in M; however, to minimise both the average-case and worst-case running time one should choose the smallest value of M with the property that the algorithm must terminate once both lists contain exactly M elements.
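Pollard's interleaved variant can be sketched similarly (again in an illustrative multiplicative group mod p): both lists grow together, one baby step and one giant step per iteration, and the first cross-match solves the DLP.

```python
# Sketch of Pollard's interleaved BSGS: grow both lists in parallel.
from math import isqrt

def interleaved_bsgs(g, h, p, N):
    M = isqrt(N - 1) + 1
    baby, giant = {}, {}
    gM_inv = pow(g, -M, p)
    b, c = 1, h                      # b = g^i, c = h * g^(-M*i)
    for i in range(M):
        if b in giant:               # g^i = h*g^(-M*j)  =>  n = i + M*j
            return i + M * giant[b]
        baby[b] = i
        if c in baby:                # h*g^(-M*i) = g^i'  =>  n = i' + M*i
            return baby[c] + M * i
        giant[c] = i
        b = b * g % p
        c = c * gM_inv % p
    return None
```

The expected position of the first match is governed by E[max{x, y}] = 2/3, as in Pollard's analysis quoted above.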
All the above analysis applies both for the discrete logarithm problem in a group of size N and the discrete logarithm problem in an interval of length N .
2.1. Grumpy giants. Bernstein and Lange [2] suggested a new variant of the BSGS algorithm, called the "two grumpy giants and a baby" algorithm. The baby steps are of the form n0 P for small values n0. One grumpy giant starts at Q and takes steps of size P' = MP for M ≈ 0.5√N. The other grumpy giant starts at 2Q and takes steps of size −P'' = −(M + 1)P. The algorithm is an interleaving algorithm in the sense that all three walks are done in parallel and stored in lists. At each step one checks for a match among the lists (a match between any two lists allows us to solve the DLP; for example, a match 2Q − k(M + 1)P = iP implies 2n ≡ i + k(M + 1) (mod N), and this equation has a unique solution when N > 2 is prime). The exact performance of the grumpy-giants method is not known, but Bernstein and Lange conjecture that their method can be faster than Pollard rho. However, [2] does not contain very much evidence to support this claim.
The crux of the analysis in [2], building on the work of Chateauneuf, Ling and Stinson [5], is to count "slopes". We re-phrase this idea as follows. After L steps we have computed three lists {iP : 0 ≤ i < L}, {Q + jMP : 0 ≤ j < L} and {2Q − k(M + 1)P : 0 ≤ k < L}, and the DLP is solved as long as either Q = (i − jM)P or 2Q = (i + k(M + 1))P or Q = (jM + k(M + 1))P. Hence, the number of points Q whose DLP is solved after L steps is exactly the size of the union

L_L = {i − jM mod N} ∪ {2^{-1}(i + k(M + 1)) mod N} ∪ {jM + k(M + 1) mod N},

where i, j, k range over 0 ≤ i, j, k < L. The algorithm succeeds after L steps with probability #L_L/N, over random choices for Q. The term "slopes" is just another way to express the number of elements in L_L.
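As a sanity check, the set of solved discrete logarithms can be enumerated directly for toy parameters. The sketch below builds the three families of residues described above; the small prime N and the choice M ≈ √N/2 are illustrative assumptions.

```python
# Enumerate the "slopes" set L_L for the grumpy-giants algorithm and count
# how many DLP targets are solved after L steps of each walk.
from math import isqrt

def grumpy_solved(N, M, L):
    inv2 = pow(2, -1, N)                        # N is an odd prime here
    S = set()
    for i in range(L):
        for j in range(L):
            S.add((i - j * M) % N)              # match between lists 1 and 2
    for i in range(L):
        for k in range(L):
            S.add(inv2 * (i + k * (M + 1)) % N) # match between lists 1 and 3
    for j in range(L):
        for k in range(L):
            S.add((j * M + k * (M + 1)) % N)    # match between lists 2 and 3
    return len(S)

N = 10007                      # toy prime group order
M = isqrt(N) // 2              # M ~ sqrt(N)/2
for L in (20, 50, 90):
    print(L, round(grumpy_solved(N, M, L) / L**2, 3))   # estimates of c(L/sqrt(N))
```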
Note that the grumpy giants algorithm is designed for solving the DLP in a group of size N . The algorithm is not appropriate for the DLP in an interval, as the lists {iP } and {Q + jM P } might not collide at all for 0 ≤ i, j ≤ M , and the point 2Q may be outside the interval. Hence, for the rest of this section we assume P has order equal to N .
We have developed a new approach for computing approximations of the average-case runtimes of interleaved BSGS algorithms. We use this approach to give an approximation to the average-case running time of the grumpy-giants algorithm. Such an analysis does not appear in [2].
We let α be such that the algorithm halts after at most α√N steps. When M = ⌈√N⌉ the DLP is solved using the first two sets {iP}, {Q + jMP} after √N steps, so it seems safe to always assume α ≤ 1. Note that α seems to be the main quantity that is influenced by the choice of M. Now, for 1 ≤ L ≤ α√N one can consider the size of L_L on average (the precise size depends slightly on the value of N). Initially we expect #L_L to grow like 3L², but as L becomes larger the growth slows, until finally at L ≈ √N we have #L_L = N ≈ L². Bernstein and Lange suggest that #L_L ≈ (23/8)L² when L becomes close to √N. The performance of the algorithm depends on the value #L_L/L², so we need to study this in detail. It will be more convenient to rescale the problem to be independent of N. So for integers 0 ≤ L ≤ α√N we write c(L/√N) = #L_L/L². We then extend this function to c(t) for a real variable 0 ≤ t ≤ α by linearly interpolating those points. Note that #L_L/N = c(L/√N)L²/N = c(t)t² when t = L/√N. We have performed simulations, for relatively small values of N, that enumerate the set L_L for a range of values of L. From these simulations we can get approximate values for α and some data points for c(t). A typical example (this is when N ≈ 2^28 and M = ⌈√N/2⌉) is 0.94 < α < 0.97, and some data points for c(t) are listed in Table 1.
Denote by Pr(L) the probability that the algorithm has solved the DLP after L steps, and by Pr(> L) = 1 − Pr(L) the probability that the algorithm has not succeeded after L steps. As noted, after L = t√N steps, for 0 ≤ t ≤ α, the algorithm succeeds with probability Pr(L) = #L_L/N = c(t)t². Hence, the probability that the algorithm has not succeeded after L = t√N steps is Pr(> L) = 1 − c(t)t². Now, the expected number of steps before the algorithm halts is (using the identity E[X] = Σ_{L≥0} Pr(X > L), and the fact that Pr(> L) = 0 for L > α√N)

E = Σ_{0 ≤ L ≤ α√N} (1 − c(L/√N)(L/√N)²).

Substituting t = L/√N and noting dL = √N dt allows us to approximate the sum as

E ≈ √N ∫_0^α (1 − c(t)t²) dt = √N (α − ∫_0^α c(t)t² dt).

To determine the running time it remains to estimate ∫_0^α c(t)t² dt and to multiply by 3 (since three group operations are performed for each step of the algorithm). For example, using numerical integration (we simply use the trapezoidal rule) based on the data in Table 1, we estimate α ≈ 0.97 and ∫_0^α c(t)t² dt ≈ 0.55. This estimate would give a running time (number of group operations) of roughly 3(0.97 − 0.55)√N ≈ 1.26√N. We now give a slightly more careful, but still experimental, analysis; Figure 1 reports on our simulations. It is an open problem to give a complete and rigorous analysis of the grumpy-giants algorithm.
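The trapezoidal-rule step is elementary; the sketch below shows the computation 3(α − ∫_0^α c(t)t² dt) with made-up sample points for (t, c(t)) (the real data is in Table 1, which we do not reproduce here, so the resulting constant is only illustrative).

```python
# Numerical sketch of the running-time constant 3*(alpha - integral of c(t)*t^2)
# via the trapezoidal rule on sample points (t, c(t)). The samples below are
# illustrative assumptions, not the paper's measured data.
def runtime_constant(samples, ops_per_step=3):
    # samples: (t, c(t)) pairs sorted by t, ending at t = alpha
    alpha = samples[-1][0]
    integral = 0.0
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        f0, f1 = c0 * t0**2, c1 * t1**2      # integrand c(t)*t^2
        integral += (t1 - t0) * (f0 + f1) / 2
    return ops_per_step * (alpha - integral)

# c(t) starts near 3 and decays; c(alpha)*alpha^2 must be 1 at t = alpha.
samples = [(0.0, 3.0), (0.2, 3.0), (0.4, 2.9), (0.6, 2.7), (0.8, 2.4), (0.97, 1.06)]
print(runtime_constant(samples))
```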
The above analysis does not allow us to give a very precise conjecture on the running time of the grumpy-giants algorithm. Instead we performed some computational experiments to get an estimate of its average-case performance. Details of the experiments are given in Appendix A. The results are listed in Table 2 (these results are for M = ⌈√N/2⌉). In Table 3 we write the value 1.25, which seems to be a reasonable conjecture. We write 3α for the worst-case cost, which seems a safe upper bound. Overall, our analysis and experiments support the claim by Bernstein and Lange [2] that the grumpy-giants algorithm has better complexity than Pollard rho. Our work also confirms that their suggested value M = √N/2 is a good choice. We remark that Pollard's result on interleaved BSGS can also be obtained using this theoretical argument: we have α = 1 and c(t) = 1, and the constant in the running time is 2(1 − ∫_0^1 t² dt) = 4/3 (each step being one baby step and one giant step, i.e., two group operations).

Table 3. The constant c such that the algorithm performs c(1 + o(1))√N group operations for large enough groups of size N.

Algorithm                                        Average-case    Worst-case
Textbook BSGS [19]                               1.5             2.0
Textbook BSGS optimised for average-case [18]    1.414           2.121
Pollard interleaving BSGS [17]                   1.333           2.0
Grumpy giants [2]                                1.25*           ≤ 3
Pollard rho using distinguished points [20]      1.253           ∞
Gaudry-Schost [7]                                1.661           ∞
BSGS with negation                               1.0             1.5
Pollard interleaving BSGS with negation          0.943           1.414
Grumpy giants with negation                      0.9*            ≤ 2.7
Pollard rho using negation [3,21]                0.886(1+o(1))   ∞
Gaudry-Schost using negation [8]                 1.36            ∞

The first block lists algorithms for general groups, and all these results are known (see Section 2). The values for the grumpy-giants algorithm (marked by an asterisk) are conjectural, and the values for the rho and Gaudry-Schost algorithms are heuristic. The second block lists algorithms for groups having an efficiently computable inversion (see Section 3). Some of these results are new (the first one appears as an exercise in the first author's textbook).
The third block lists algorithms that exploit efficient inversion as well as our main observation, and these results are all new (see Section 5).

2.2. Summary. The known results, and the new results of our paper, are summarised in Table 3. Out of interest we include the values √(π/2) ≈ 1.253 (and √(π/4) ≈ 0.886 when using negation) for the average-case number of group operations of Pollard rho using distinguished points. We also include entries for versions of the Gaudry-Schost algorithm (which performs better than the Pollard kangaroo method). The o(1) term accounts for many issues (e.g., initial set-up costs, "un-needed" group elements computed in the final block, effects from the number of partitions used to define pseudorandom walks, expected time to get a distinguished point, dealing with cycles in random walks on equivalence classes, etc.).

Using efficient inversion in the group
It is known [9,23,3,22] that the negation map can be used to speed up the computation of elliptic curve discrete logarithms. Recall that if P = (x P , y P ) is a point on a Weierstrass model of an elliptic curve then −P has the same x-coordinate. As mentioned, one can speed up the Pollard rho algorithm for the ECDLP by doing a pseudorandom walk on the set of equivalence classes under the equivalence relation P ∼ ±P . More generally, the idea applies to any group for which an inversion can be computed more efficiently than a general group operation (e.g., in certain algebraic tori).
One can also speed up the baby-step giant-step algorithm. We present the details in terms of elliptic curves. Let M = ⌈√N⌉. Then n = ±n0 + M n1, where −M/2 ≤ n0 < M/2 and 0 ≤ n1 < M. Compute M/2 baby steps n0 P for 0 ≤ n0 ≤ M/2. Store the values (x(n0 P), n0) in a sorted structure. Next, compute P' = MP and the giant steps Q − n1 P' for n1 = 0, 1, . . . . For each point computed we check whether its x-coordinate lies in the sorted structure. If we have a match then Q − n1 P' = ±n0 P, so Q = (±n0 + M n1)P and the ECDLP is solved. Compared with the original "textbook" BSGS algorithm optimised for the average case, we have reduced the running time by a factor of √2. This is exactly what one would expect.
Readers may be confused about whether we are fully exploiting the ± sign. To eliminate confusion, note that a match x(Q − n1 P') = x(n0 P) is the same as ±(Q − n1 P') = ±n0 P, and this reduces to the equation Q − n1 P' = ±n0 P.
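An end-to-end toy sketch of this variant: BSGS with the negation map on a small supersingular curve y² = x³ + x over F_103 (the curve, base point and group order 104 are illustrative assumptions). Matches are detected on x-coordinates only, and each candidate match is verified to resolve the ± sign.

```python
# Toy BSGS with negation map on y^2 = x^3 + x over F_103 (group order 104).
from math import isqrt

p, a = 103, 1

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                  # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(n, P):
    R = None
    while n:
        if n & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        n >>= 1
    return R

def ec_neg(P):
    return None if P is None else (P[0], (-P[1]) % p)

def bsgs_negation(P, Q, N):
    """Solve Q = nP with 0 <= n < N, storing only x(n0*P) for the baby steps."""
    M = isqrt(N - 1) + 1
    baby, R = {}, None
    for n0 in range(M // 2 + 1):                     # baby steps, 0 <= n0 <= M/2
        baby[R[0] if R else None] = n0
        R = ec_add(R, P)
    minus_MP = ec_neg(ec_mul(M, P))
    G = Q
    for n1 in range(M + 1):                          # giant steps Q - n1*(M*P)
        x = G[0] if G else None
        if x in baby:
            n0 = baby[x]
            for n in ((n0 + M * n1) % N, (-n0 + M * n1) % N):
                if ec_mul(n, P) == Q:                # resolve the +- ambiguity
                    return n
        G = ec_add(G, minus_MP)
    return None

P = (1, 38)                    # on the curve: 38^2 = 1^3 + 1 = 2 (mod 103)
Q = ec_mul(29, P)
n = bsgs_negation(P, Q, 104)
print(n is not None and ec_mul(n, P) == Q)   # -> True
```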
One can of course combine this trick with Pollard's interleaving idea. Take now M = ⌈√(2N)⌉ so that n = ±n0 + M n1 with 0 ≤ n0, n1 ≤ M/2. The algorithm computes the lists of baby steps {x(n0 P)} and giant steps {x(Q − n1 P')} in parallel until the first match.

Lemma. The average-case running time of this interleaved algorithm is (2√2/3)√N ≈ 0.943√N group operations.

Proof. The worst-case number of steps (this is an interleaved algorithm, so a "step" now means computing one baby step and one giant step) is M/2, giving a total cost of 2(M/2) = M = √(2N) group operations. By Pollard's analysis [18], generating both walks in parallel leads to a match after 4(M/2)/3 group operations on average. The leading constant in the running time is therefore 2√2/3 ≈ 0.9428.
We can also prove this result using the method from Section 2.1. The number of points Q whose DLP can be solved after L steps is 2L² (since we have Q = (±i + jM)P), and so c(t) = 2 for 0 ≤ t < α. When M = √(2N) then α = 1/√2, and so the constant in the running time is 2(α − ∫_0^α 2t² dt) = 2(1/√2 − 1/(3√2)) = 2√2/3. One might wonder if there is a better way to organise the interleaving. Since one gets two baby steps for each group operation, it would be tempting to take more giant steps on average than baby steps. However, the goal is to maximise the number 2L1 L2 of points Q solved after L1 + L2 steps (where L1 and L2 denote the number of group operations spent computing baby steps and giant steps respectively). This boils down to maximising f(x, y) = 2xy subject to x + y = 1, which is easily seen to have the solution x = y = 1/2. Hence, the optimal way to organise the interleaving is to use the same number of group operations for baby steps and giant steps at each time L.
3.1. Grumpy giants with negation. One can consider the grumpy-giants method where matches are detected using x(iP), x(Q + jMP), x(2Q − k(M + 1)P). This algorithm is not mentioned in [2]; hence, one of the contributions of our paper is to develop the algorithm in this case and analyse it. The first task is to count "slopes". After L steps we have computed three lists {x(iP) : 0 ≤ i < L}, {x(Q + jMP) : 0 ≤ j < L} and {x(2Q − k(M + 1)P) : 0 ≤ k < L}. A collision between the first two lists implies Q + jMP = ±iP and so Q = (±i − jM)P. A collision between the first and third lists implies 2Q − k(M + 1)P = ±iP and so Q = 2^{-1}(±i + k(M + 1))P. A collision between the second and third lists implies either Q + jMP = 2Q − k(M + 1)P or Q + jMP = −2Q + k(M + 1)P; hence we have either Q = (jM + k(M + 1))P or Q = 3^{-1}(k(M + 1) − jM)P. The relevant quantity to consider is the size of the union

L_L = {±i − jM} ∪ {2^{-1}(±i + k(M + 1))} ∪ {jM + k(M + 1)} ∪ {3^{-1}(k(M + 1) − jM)} (mod N),

where 0 ≤ i, j, k < L. We follow the heuristic analysis given in Section 2.1. Let α be such that the algorithm halts after at most α√N steps. We also define c(t) to be such that the number of elements in L_L is equal to c(L/√N)L². We have conducted simulations for various values of N (again, the values of α and c(t) depend on N and on the choice of M). A typical example (this is for N ≈ 2^28 and M = ⌈√N/2⌉) has 0.87 < α < 0.92, and the values for c(t) are as in Table 4. Figure 2 reports on our simulations, again obtained by computing L_L exactly for various choices of N. Our experiments suggest that M = ⌈√N/2⌉ is the best choice, for which 0.9 ≤ α ≤ 0.96. The integral under the curve, computed using numerical methods, is in the interval [0.58, 0.60], giving a running time c√N for 3(0.90 − 0.60) = 0.90 ≤ c ≤ 1.14 = 3(0.96 − 0.58). Table 5 gives our experimental results. Details about the experiments are given in Appendix A. The average running time seems to be around 0.9√N, so that is the value we put in Table 3.
Note that 1.25/0.9 ≈ 1.39 < √2, so we have not obtained a speedup by a factor of approximately √2. Hence, this algorithm seems not to be faster than Pollard rho on equivalence classes [3,21]. However, it does still seem to be an improvement over Pollard's interleaved version of the standard BSGS algorithm.

Computing lists of elliptic curve points efficiently
The BSGS and Pollard rho algorithms require computing elliptic curve operations in affine coordinates rather than projective coordinates (otherwise collisions are not detected). Affine arithmetic requires inversion in the field, and this operation is considerably more expensive than multiplication in the field or other operations.

Table 5. Results of experiments with the grumpy-giants algorithm exploiting efficient inversion.
A standard technique is to use the Montgomery trick [15] to offset the cost of inversions, but this technique in the context of baby-step giant-step algorithms has not previously been fully explored.

Elliptic curve group law for affine Weierstrass equations. Let
E : y² + a1·xy + a3·y = x³ + a2·x² + a4·x + a6 be a Weierstrass equation for an elliptic curve and let X = (x1, y1) and Y = (x2, y2) be two general points on E (in other words, we assume Y ≠ ±X). Recall that −X = (x1, −y1 − a1·x1 − a3). Let (x3, y3) = X + Y. (We are interested in affine coordinates rather than projective coordinates as the BSGS algorithm cannot be used with projective coordinates.) Then

λ = (y2 − y1)/(x2 − x1), x3 = λ² + a1·λ − a2 − x1 − x2, y3 = λ(x1 − x3) − y1 − a1·x3 − a3.

Let M, S and I denote the cost of a multiplication, squaring and inversion in F_q respectively. We ignore the cost of multiplications by fixed constants such as a1, since these are often chosen to be 0 or 1. Then the cost of point addition on E(F_q) is close to I + 2M + S. One can check that point doubling requires I + 2M + 2S. Therefore the main cost of point addition is the computation of the field inversion (x2 − x1)^{-1}. Extensive experiments from [6] suggest that a realistic estimate of the ratio I/M of inversion to multiplication cost is 8 (or higher). Now, note that (x4, y4) = X − Y = X + (−Y) = (x1, y1) + (x2, −y2 − a1·x2 − a3) is computed as follows:

μ = (−y2 − a1·x2 − a3 − y1)/(x2 − x1), x4 = μ² + a1·μ − a2 − x1 − x2, y4 = μ(x1 − x4) − y1 − a1·x4 − a3.

We see that we can re-use the inversion (x2 − x1)^{-1} and hence compute X − Y with merely the additional cost 2M + S. Taking I = 8M and S = 0.8M, the cost of a "combined ± addition" is I + 4M + 2S = 13.6M. Compared with the cost 2(I + 2M + S) = 21.6M of two additions, we have a speedup by a factor of 13.6/21.6 ≈ 0.63. If the cost of inversion is much higher (e.g., I ≈ 20M) then the speedup is by a factor close to 2. Elliptic curves over non-binary finite fields can be transformed to Edwards form x² + y² = c²(1 + x²y²), with (0, c) as identity element. Edwards curves can be efficient when using projective coordinates, but we need affine coordinates for BSGS algorithms. Let X = (x1, y1) and Y = (x2, y2); the addition law is

X + Y = ( (x1·y2 + y1·x2)/(c(1 + x1·x2·y1·y2)), (y1·y2 − x1·x2)/(c(1 − x1·x2·y1·y2)) ).

The cost of addition is therefore I + 6M. Since −(x, y) = (−x, y), we have

X − Y = ( (x1·y2 − y1·x2)/(c(1 − x1·x2·y1·y2)), (y1·y2 + x1·x2)/(c(1 + x1·x2·y1·y2)) ).
We can re-use the inversions and compute X − Y with merely the additional cost 2M. So the speedup is (I + 8M)/(2(I + 6M)) = 16/28 ≈ 0.57 (taking I = 8M). The speedup looks good, but note that the cost I + 8M of the combined ± addition is worse than the I + 4M + 2S obtained using Weierstrass curves. The observation that one can compute X + Y and X − Y efficiently simultaneously was first introduced by Wang and Zhang [22], who tried to apply it to the Pollard rho algorithm. This seems not to be effective, since the Pollard rho algorithm uses random walks that go in a "single direction", whereas the combined operation X + Y and X − Y gives us steps in "two different directions". The well-known Montgomery simultaneous modular inversion trick [15] allows one to invert k field elements using one inversion and 3(k − 1) multiplications in the field. Such ideas have been used in the context of Pollard rho: one performs k pseudorandom walks in parallel, and the group operations are computed in parallel using simultaneous modular inversion. However, the BSGS algorithm as usually described is inherently serial, so this approach does not seem to have been proposed in the context of BSGS.
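The shared-inversion computation is the heart of the paper's main observation. Here is a minimal sketch for the common short-Weierstrass case (a1 = a2 = a3 = 0, so the general formulas above simplify); the toy prime, curve and sample points are assumptions for illustration.

```python
# Combined "X + Y and X - Y" affine addition on y^2 = x^3 + a*x + b over F_p,
# sharing the single field inversion (x2 - x1)^(-1). Toy curve: y^2 = x^3 + x
# over F_103.
p, a, b = 103, 1, 0

def add_sub(X, Y):
    """Return (X + Y, X - Y) for affine X, Y with x1 != x2, using one inversion."""
    (x1, y1), (x2, y2) = X, Y
    inv = pow(x2 - x1, -1, p)            # the only inversion
    lam1 = (y2 - y1) * inv % p           # slope for X + Y
    x3 = (lam1 * lam1 - x1 - x2) % p
    y3 = (lam1 * (x1 - x3) - y1) % p
    lam2 = (-y2 - y1) * inv % p          # slope for X + (-Y): inversion re-used
    x4 = (lam2 * lam2 - x1 - x2) % p
    y4 = (lam2 * (x1 - x4) - y1) % p
    return (x3, y3), (x4, y4)

X, Y = (1, 38), (6, 4)                   # both points lie on y^2 = x^3 + x (mod 103)
print(add_sub(X, Y))                     # -> ((31, 63), (10, 17))
```

The extra work for the second output is exactly the 2M + S identified in the text: one multiplication for the new slope, a squaring, and a final multiplication.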
The aim of this section is to describe efficient ways to compute a list of points of the form L = {S + [i]T : 0 ≤ i < M }. The serial method is to compute M point additions, each involving a single inversion.
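Montgomery's simultaneous inversion, on which the parallel methods below rely, is itself only a few lines; a sketch mod a toy prime:

```python
# Montgomery's simultaneous inversion: invert k elements mod p for the price
# of one inversion plus O(k) multiplications.
def batch_inverse(xs, p):
    k = len(xs)
    prefix = [1] * (k + 1)
    for i, x in enumerate(xs):          # prefix[i+1] = x_0 * ... * x_i
        prefix[i + 1] = prefix[i] * x % p
    inv = pow(prefix[k], -1, p)         # the single inversion
    out = [0] * k
    for i in range(k - 1, -1, -1):      # peel off one factor at a time
        out[i] = inv * prefix[i] % p    # = x_i^(-1)
        inv = inv * xs[i] % p           # inv = (x_0 * ... * x_{i-1})^(-1)
    return out

p = 10007
xs = [3, 5, 1234, 9999]
assert all(x * y % p == 1 for x, y in zip(xs, batch_inverse(xs, p)))
```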
A way to make this computation somewhat parallel is the following. Fix an integer k and set M' = ⌈M/k⌉. The list will be computed as S + (jM' + i)T for 0 ≤ j < k, 0 ≤ i < M'. One can then compute L using Algorithm 1.

Algorithm 1 Computing the list L = {S + (jM' + i)T} using simultaneous modular inversion
1: Precompute T_j = S + jM'T for 0 ≤ j < k
2: for i = 0 to M' − 1 do
3:   Add the k points T_0, . . . , T_{k−1} to L
4:   Compute in parallel, using the Montgomery trick, T_j = T_j + T for 0 ≤ j < k

In other words, we can significantly reduce the overall cost of performing inversions, at the price of a few extra multiplications. Next we explain how to do even better: we can essentially halve the number of inversions with only a few more additional multiplications.
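A sketch of Algorithm 1 in Python, on the same kind of toy curve used earlier. The curve, points and parameters are illustrative assumptions, and degenerate additions (a walk stepping onto ±T or the point at infinity) are not handled in this sketch.

```python
# Algorithm 1 sketch: compute [S + i*T : 0 <= i < M] as k parallel subsequences,
# batching the k slope inversions of each step with Montgomery's trick.
# Toy curve y^2 = x^3 + x over F_103.
p, a = 103, 1

def batch_inverse(xs):
    prefix = [1]
    for x in xs:
        prefix.append(prefix[-1] * x % p)
    inv = pow(prefix[-1], -1, p)                   # the single inversion
    out = [0] * len(xs)
    for i in range(len(xs) - 1, -1, -1):
        out[i] = inv * prefix[i] % p
        inv = inv * xs[i] % p
    return out

def ec_add(P, Q):                                  # plain affine addition
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return None
    lam = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(n, P):
    R = None
    while n:
        if n & 1: R = ec_add(R, P)
        P = ec_add(P, P); n >>= 1
    return R

def point_list(S, T, M, k):
    """Compute [S + i*T for 0 <= i < M] with one inversion per k additions."""
    Mp = M // k                                    # M' = M/k; assume k | M
    walks = [ec_add(S, ec_mul(j * Mp, T)) for j in range(k)]  # T_j = S + j*M'*T
    rows = []
    for _ in range(Mp):
        rows.append(list(walks))
        invs = batch_inverse([(T[0] - W[0]) % p for W in walks])  # shared step
        for j, W in enumerate(walks):              # advance every walk by T
            lam = (T[1] - W[1]) * invs[j] % p
            x3 = (lam * lam - W[0] - T[0]) % p
            walks[j] = (x3, (lam * (W[0] - x3) - W[1]) % p)
    return [rows[i][j] for j in range(k) for i in range(Mp)]

T = (6, 4)                      # point on the curve
S = ec_mul(20, T)
print(point_list(S, T, 8, 2) == [ec_add(S, ec_mul(i, T)) for i in range(8)])  # -> True
```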

4.3. Improved algorithm for computing lists of points. We now combine the X ± Y idea with simultaneous modular inversion. Both tricks replace an inversion with a couple of multiplications. Recall from Algorithm 1 that we are already computing the list L as {S + (jM' + i)T : 0 ≤ j < k, 0 ≤ i < M'}, where the points T_j = S + jM'T are precomputed and the additions +T are parallelised with simultaneous modular inversion.
The idea is to also precompute T' = [3]T. For 0 ≤ i < M'/3 we compute the sums T_j ± T and then T_j + T'. In other words, we compute three consecutive points T_j + (3i − 1)T, T_j + (3i)T, T_j + (3i + 1)T using fewer than three full group operations.
To get a greater speedup we use more of the combined operations. Hence, instead of using T' = [3]T we can take T' = [5]T. Then in the i-th iteration we compute T_j = T_j + T' = T_j + 5T and then also T_j ± T and T_j ± 2T (where 2T is precomputed). In other words, we compute a block of 5 consecutive points T_j + (5i − 2)T, T_j + (5i − 1)T, T_j + (5i)T, T_j + (5i + 1)T and T_j + (5i + 2)T with one addition and two "combined ± additions". Extending this idea, one can use T' = [2ℓ + 1]T and precompute 2T, . . . , ℓT. We give the details as Algorithm 2; its inner loop is:

7: for u = 1 to ℓ do
8:   Compute in parallel T_0 ± uT, T_1 ± uT, . . . , T_{k−1} ± uT
9:   Add the 2k new points to L

Lemma. Ignoring precomputation, the cost of Algorithm 2 is roughly (M/(2k))I + (7/2)M M + M S (that is, about 1/(2k) inversions, 3.5 multiplications and one squaring per point computed).

Proof. The precomputation in step 3 is at most ℓ + 2 + 2 log₂(M/k) elliptic curve operations. Similarly, step 4 is k group operations (in serial).
The main loop has M/(k(2ℓ + 1)) iterations. Within each iteration there is an inner loop of ℓ iterations, each performing one simultaneous modular inversion (cost I + 3(k − 1)M) followed by the remaining 2k(2M + S) for the 2k point additions. After that loop there is a further simultaneous modular inversion and k additions. So the cost of each iteration of the main loop is ℓ(I + 3(k − 1)M + 2k(2M + S)) + (I + 3(k − 1)M + k(2M + S)).

The result follows.
The important point is that we have further reduced the number of inversions and also reduced the number of multiplications. Again, a naive argument suggests the cost per point is reduced by a factor of up to (3.5 + 0.8)/10.8 ≈ 0.4 compared with the standard serial approach, or 4.3/5.8 ≈ 0.74 compared with using Algorithm 1.
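The cost-per-point comparison above can be checked with a back-of-envelope model, taking I = 8M and S = 0.8M as in the text (the parameter choices k = 16, ℓ = 8 in the usage lines are illustrative assumptions).

```python
# Per-point cost model (in multiplications M) for the three approaches,
# taking I = 8M and S = 0.8M as in the text.
I, M, S = 8.0, 1.0, 0.8

def serial():                 # one inversion per point: I + 2M + S
    return I + 2 * M + S

def algorithm1(k):            # Montgomery trick over k parallel walks
    return I / k + 3 * (k - 1) * M / k + 2 * M + S

def algorithm2(k, l):         # combined +/- additions with T' = (2l+1)T
    invs = (l + 1) / (k * (2 * l + 1))                     # inversions per point
    mults = 3 * (l + 1) * (k - 1) / (k * (2 * l + 1)) + 2  # multiplications per point
    return invs * I + mults * M + S

print(serial())                     # -> 10.8
print(algorithm1(16))               # approaches 5.8 as k grows
print(algorithm2(16, 8))            # approaches 4.3 as k and l grow
```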

Application to BSGS algorithm
It is now straightforward to apply the ideas of Section 4 to the various baby-step giant-step algorithms for the DLP. All such algorithms involve computing two or three lists of points, and these operations can be sped up by the general techniques in Section 4. It follows that we get a saving by a constant factor for all such algorithms (see Algorithm 3 for an example of the standard BSGS algorithm using efficient inversion; the details for the grumpy-giants algorithm are similar). This explains the last rows of Table 3.
However, note that there is again a tradeoff in terms of the size of blocks (and hence the values of (k, )) and the running time. The BSGS algorithm will compute a block of points in parallel and then test for matches between the lists. Hence, the algorithm will usually perform more work than necessary before detecting the first match and solving the DLP. It follows that the average number of group operations performed is increased by a small additive factor proportional to k (this contributes to the o(1) term in the running time).
Algorithm 3 Interleaved baby-step giant-step algorithm for elliptic curves, exploiting efficient inversion (i.e., using x-coordinates) and computing points in blocks
4: Use Algorithm 2 to compute a block of baby steps (x(n0 P), n0) for 0 ≤ n0 ≤ M/2 and store them in an easily searched structure L1
5: Use Algorithm 2 to compute a block of giant steps (x(Q − n1 P'), n1) for 0 ≤ n1 ≤ M/2 and store them in an easily searched structure L2
6: if L1 ∩ L2 ≠ ∅ then solve the ECDLP and halt, else continue with the next blocks

Hence, all the results in Table 3 can be multiplied by 0.4, and that is what we have done to get the first two rows in the final block (e.g., 0.9 · 0.4 = 0.36 is the claimed constant in the running time of the grumpy-giants algorithm). For comparison, we consider the Pollard rho method implemented so that k walks are performed in parallel using Montgomery's simultaneous modular inversion method. As we explained, this method gives an asymptotic speedup of 0.53, meaning the constant in this parallel Pollard rho algorithm is 0.53 · 0.886(1 + o(1)) = 0.47(1 + o(1)). This value is the final entry in Table 3. Our conclusion is that, in this optimised context, both the interleaved BSGS and grumpy-giants algorithms are superior to Pollard rho.

Other settings
There are other groups that have efficient inversion. For example, consider the subgroup G_{q,2} of order q + 1 in F*_{q²}; this can be viewed as the group of elements of F*_{q²} of norm 1. A standard fact is that one can compute a product h·g of two elements of F_{q²} using three multiplications in F_q (and some additions). Now, when g lies in the group G_{q,2}, computing g^{-1} is easy: since g·g^q = Norm(g) = 1, the inverse g^{-1} = g^q is obtained by the action of Frobenius. So if g = u + vθ then g^{-1} = u − vθ. It follows that one can compute hg and hg^{-1} in four multiplications, only one multiplication more than computing hg alone. So our BSGS improvement can be applied in this setting too. The speedup is by a factor of 4/6 = 2/3, since we need 4 multiplications to do the work previously done by 6 multiplications. This is less total benefit than in the elliptic curve case, but it still might be of practical use. The group G_{q,2} is relevant for XTR: one way to represent XTR is using elements of G_{q³,2}. We refer to Section 3 of [11] for discussion. The cost of an "affine" group operation in the torus representation of G_{q,2} is I + 2M, which is worse than the cost 3M mentioned already. So when implementing a BSGS algorithm it is probably better to use standard finite field representations.
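The four-multiplication computation of hg and hg^{-1} can be sketched with Karatsuba-style products; the toy prime q and non-square d below are illustrative assumptions, and multiplications by the small constant d are not counted, matching the convention above.

```python
# Compute h*g and h*g^q simultaneously in F_{q^2} = F_q(theta), theta^2 = d,
# using 4 multiplications in F_q. For g of norm 1, g^q = g^(-1).
q, d = 101, 2          # 2 is a non-square mod 101, so theta^2 = 2 defines F_{q^2}

def mul_and_conj_mul(h, g):
    """h, g = (u, v) representing u + v*theta. Returns (h*g, h*g^q)."""
    (u1, v1), (u2, v2) = h, g
    m1 = u1 * u2 % q
    m2 = v1 * v2 % q
    m3 = (u1 + v1) * (u2 + v2) % q        # Karatsuba: u1*v2 + u2*v1 = m3 - m1 - m2
    m4 = (u1 - v1) * (u2 + v2) % q        # u1*v2 - u2*v1 = m4 - m1 + m2
    hg = ((m1 + d * m2) % q, (m3 - m1 - m2) % q)
    hg_conj = ((m1 - d * m2) % q, (m1 - m2 - m4) % q)   # theta-part: u2*v1 - u1*v2
    return hg, hg_conj

g = (3, 2)             # norm(g) = 3^2 - 2*2^2 = 1, so g lies in G_{q,2}
print(mul_and_conj_mul((5, 7), g))    # -> ((43, 31), (88, 11))
```

Computing hg alone with Karatsuba uses m1, m2, m3; the single extra product m4 yields hg^{-1} as well, which is the 4-versus-6 multiplication count quoted in the text.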

Conclusion
Our work is inspired by the idea that the negation map can be used to speed up the computation of elliptic curve discrete logarithms. We explain how to compute lists of consecutive elliptic curve points efficiently, by exploiting Montgomery's trick and also the fact that for any two points X and Y we can efficiently get X − Y when we compute X + Y by sharing a field inversion. We use these ideas to speed up the baby-step giant-step algorithm. Compared to the previous approaches we achieve a significant speedup for computing elliptic curve discrete logarithms.
We also give a new method to analyse the grumpy-giants algorithm, and describe and analyse a variant of this algorithm for groups with efficient inversion.
The new algorithms, like the original, have low overhead but high memory. This means that our algorithms are useful only for discrete logarithm problems small enough for the lists to fit into fast memory. For applications such as [4] that require solving DLP instances in a short interval, the best method by far is Pollard's interleaved BSGS algorithm together with our idea of computing points in blocks; the grumpy-giants algorithm is not suitable for this problem, and the Pollard kangaroo and Gaudry-Schost methods take about twice the time.

Table 6. Results of experiments with the 4-giants algorithm without negation.
Appendix A. Details of experiments

For each algorithm and each group G, the average running time over 10000 trials was computed and represented in the form c√N. Repeating this for each of the groups G gives a list of 100 values c, each of which is an estimate of the average-case running time of the algorithm in one of the groups G. Then we computed the mean and standard deviation of the 100 estimates c, giving a good estimate of the average-case complexity c_i√N of the algorithm for (i + 1)-bit group orders. This is the value reported. Table 2 gives the mean values obtained for c_i over such experiments with the original grumpy-giants algorithm with M = ⌈√N/2⌉. Table 5 gives the mean values obtained for the grumpy-giants algorithm using efficient inversion with M = ⌈√N/2⌉.

Appendix B. 4-giants algorithm
In [2] a 4-giants algorithm is mentioned (two giants in one direction, two in the other, with computer-optimised shifts of the initial positions) as a variant of the 2-giants algorithm. That paper does not describe the 4-giants algorithm in detail. However, we implemented a plausible version of the 4-giants algorithm, and the experimental results are not as good as one might have expected. It seems that the 4-giants algorithm is less efficient than the 2-giants algorithm.
More precisely, the baby steps are of the form n0 P for small values n0. The first grumpy giant starts at Q and takes steps of size MP for M ≈ 0.5√N. The second grumpy giant starts at 2Q and takes steps of size −(M + 1)P. The third starts at 3Q and takes steps of size (M + 2)P. The fourth starts at 4Q and takes steps of size −(M + 3)P. The algorithm is an interleaving algorithm in the sense that all five walks are done in parallel and stored in lists. At each step one checks for a match among the lists (a match between any two lists allows us to solve the DLP). We performed experiments in the same framework as above and summarise the results in Table 6. One sees that the constants are larger than the 1.25 of the 2-giants method.