SOLVING HIGHLY-OSCILLATORY NLS WITH SAM: NUMERICAL EFFICIENCY AND LONG-TIME BEHAVIOR

Abstract. In this paper, we present the Stroboscopic Averaging Method (SAM), recently introduced in [7, 8, 10, 13], which aims at numerically solving highly-oscillatory differential equations. More specifically, we first apply SAM to the Schrödinger equation on the 1-dimensional torus and on the real line with harmonic potential, with the aim of assessing its efficiency: as compared to well-established standard splitting schemes, the stiffer the problem is, the larger the speed-up grows (up to a factor 100 in our tests). The geometric properties of SAM are also explored: on very long time intervals, symmetric implementations of the method show a very good preservation of the mass invariant and of the energy. In a second series of experiments on 2-dimensional equations, we demonstrate the ability of SAM to capture qualitatively the long-time evolution of the solution (without spurious high oscillations).

variable v^ε(τ) = u^ε(t) with τ = εt, a format in which ε clearly appears as the inverse of a frequency going to ∞ as ε goes to 0. Note that SAM is designed for problems with a single frequency. In the two equivalent forms considered above, oscillations are extrinsic, i.e. explicitly present in the vector field f, as is frequently the case in equations arising in practice and originating from physics or chemistry. Oscillations might also be intrinsic: in the application considered here, namely the Schrödinger equation, the problem in its original form is autonomous and no periodic variable is apparent. Nevertheless, it may be reformulated as in (1.1). Generally speaking, equations of the form

    (d/dt) v^ε(t) = a(v^ε(t)) + ε b(v^ε(t)),   (1.2)

where the flow map χ_t of (d/dt) v = a(v) is periodic, can easily be rewritten, through the change of variable v^ε(t) = χ_t(u^ε(t)), in the form (1.1), and this transformation is extremely simple when a is a linear map. Problem (1.1) is notoriously difficult to solve, since numerical methods, in order to achieve some accuracy, are forced to follow more and more oscillations (refer to (1.1)) as ε becomes smaller and smaller, whereas the long-term dynamics is often all that matters in applications. Various numerical methods have been proposed in the literature for solving such problems. Standard methods such as Strang splitting, or compositions thereof, which aim at solving (1.2) directly, suffer from severe step-size restrictions as ε goes to zero. More elaborate methods for (1.2) when a is linear introduce filter functions in various ways (they are usually referred to as Gautschi-type methods) and bypass some of the limitations of splitting techniques, although not in a completely satisfactory way, as resonances are still present and/or geometric properties are not preserved (see [19], Chapter XII, for a survey). The technique we present and use here proceeds in a completely different way.
It relies upon the existence of an asymptotic high-order (in ε) averaged equation associated with (1.1) and aims at approximating numerically the solution thereof through a micro-macro strategy. The underlying averaged equation being autonomous and, in particular, smooth w.r.t. ε, it can be solved with a macro-integrator which benefits from the smallness of ε, hence with a computational cost which is essentially independent of ε (the effect of the time interval T/ε becoming larger as ε → 0 is indeed counterbalanced by the possibility of using larger macro-steps H = O(1/ε)). Evidently, F^ε needs to be computed in some way or another. Analytical expressions, though available, become increasingly complex for high orders and are not easily amenable to practical computations. The strategy we use consists in solving the original equation (1.1) with a micro-integrator over several periods and then combining the resulting values to approximate F^ε through finite-difference formulas. Here, the stroboscopic character of SAM is crucial, as it allows one to assert that the solutions of (1.1) and (1.3) coincide for values of t that are multiples of the period. Altogether, SAM is a micro-macro procedure using only the original vector field f and providing approximations of the exact solution of (1.3). We emphasize once again that its computational cost, in sharp contrast with standard integrators, is independent of ε. A detailed description of stroboscopic averaging, at both the theoretical and practical levels, is given in Sections 2 and 3. Another fundamental property of SAM is its geometric character. As a matter of fact, it has been shown, first for ODEs [13] and later on for evolution equations in a Hilbert space [10], that the averaged vector field F^ε in (1.3) inherits the properties of f. In particular, at a formal level, F^ε is Hamiltonian whenever f is, and (1.3) has the same first integrals as (1.1).
For the case considered here, the quantities of interest are the Hamiltonian of the Schrödinger equation and the mass (L 2 -norm) of the solution. These two quantities are still invariants of the associated averaged equation and it is of importance to design a version of SAM which also preserves these invariants numerically. This aspect will be further discussed in Section 3.
In this work, we apply SAM to the nonlinear Schrödinger equation either on the torus T_{2π} = [0, 2π] with A = −∂²_x and periodic boundary conditions, or on R (Gross-Pitaevskii) with A = −(1/2)∂²_x + (1/2)(x² − 1). In both cases, the operator A has its spectrum included in Z, so that e^{−itA} is 2π-periodic in time. The equation can thus be regarded as an evolution equation of the form (1.2) in a functional Hilbert space X = H^s. The first case is considered as a test equation to assess the efficiency of the method (see Subsection 4.1): we demonstrate experimentally that the method can be as much as 100 times faster than Strang splitting when ε is small. We then show in Subsection 4.2 that, whenever the macro-integrator is chosen appropriately, i.e. according to the discussion of Subsection 3.2, SAM provides a numerical solution with constant mass, along which the Hamiltonian is preserved up to a small error (which does not drift in time). The Gross-Pitaevskii equation in one dimension is used similarly to confirm the error analysis of Subsection 3.1.
Finally, the objective of Section 5 is to illustrate how SAM can be used to explore the qualitative behavior of highly-oscillatory systems, in particular here the dynamics of the modes of the solution. The nonlinearity in Schrödinger equations indeed induces coupling effects and energy transfers. To this aim, we shall consider three NLS models: Gross-Pitaevskii equation in one dimension and two models in two dimensions in the context of anisotropic confinement.
2. Presentation of the stroboscopic averaging method. The so-called Stroboscopic Averaging Method (SAM for short) was introduced in [7, 8, 13] for the purpose of solving highly-oscillatory ordinary differential equations. Its foundations rely on the asymptotic technique of stroboscopic averaging, whose aim is to write the exact solution of a differential system as the composition of a periodic (rapidly oscillating in time) change of variables with the flow of an autonomous (non-stiff) differential equation. While various choices are possible for this change of variables, it is constructed in the framework of stroboscopic averaging so as to coincide with the identity map at times that are multiples of the period, a property which will be crucial in the design of SAM. The relevance of this decomposition in the case of infinite-dimensional Banach spaces was further analyzed in [10] and, in view of the application to the Schrödinger equation, we now present it in this context. It should be stressed that the theoretical background of this section is not required for the effective implementation of SAM.
2.1. Stroboscopic change of variable and associated averaged vector field. Given a differential equation, posed in a Banach space X, of the form

    (d/dt) u^ε(t) = ε f(t, u^ε(t)),   u^ε(0) = u_0,   (1.1)

where f is a smooth map from T × X into X (with T ≡ R/(P Z)) and where it is assumed that u^ε(t) exists on [0, T/ε] for some positive T, it can be shown that there exist a smooth change of variables Φ^ε : T × X → X and a smooth vector field F^ε : X → X such that Φ^ε(t, Ψ^ε(t, u_0)) is an accurate approximation of the solution u^ε(t) of (1.1), in the sense that

    sup_{0 ≤ t ≤ T/ε} ‖u^ε(t) − Φ^ε(t, Ψ^ε(t, u_0))‖_X ≤ e^{−C/ε},

where C is a positive constant independent of ε. Here, Ψ^ε(t, u_0) denotes the t-flow of the differential equation

    (d/dt) v(t) = ε F^ε(v(t)),   v(0) = u_0.

For such a result to hold, several assumptions are required, among which the most stringent one is the analyticity of f with respect to its second argument u ∈ X. In contrast, only the continuity of f w.r.t. its first argument is imposed. The next theorem (we refer to [13] for a proof in the finite-dimensional case and to [10] in the context of Banach spaces) sums up the above discussion in precise mathematical terms.

Theorem 2.1. Assume that (i) f is continuous w.r.t. its first variable, (ii) f is analytic w.r.t. its second variable, (iii) there exist ε_0 > 0 and a bounded open subset K of X such that u^ε(t) exists and remains in K for all 0 < ε < ε_0 and all t ∈ [0, T/ε], and (iv) f is bounded on T × K. Then there exist Φ^ε (continuous and P-periodic w.r.t. its first variable, analytic w.r.t. its second variable) and F^ε (analytic), and constants 0 < ε_1 < ε_0 and C > 0, such that the flow Ψ^ε of the averaged differential equation

    (d/dt) v(t) = ε F^ε(v(t))   (2.7)

satisfies, for all 0 < ε < ε_1,

    sup_{0 ≤ t ≤ T/ε} ‖u^ε(t) − Φ^ε(t, Ψ^ε(t, u_0))‖_X ≤ e^{−C/ε}.   (2.8)

Furthermore, if assumption (iii) is valid with T = +∞, then estimate (2.8) holds on [0, T̃/ε^{1+α}], for any T̃ > 0 and any 0 < α < 1, with constants ε_1 and C now depending on α and T̃.
Remark 2.1. It should be noticed that the dynamics of the weakly nonlinear equation (1.1) becomes non-trivial on intervals of length greater than T/ε, over which the variation of u^ε(t) is expected to be of size O(1). Estimate (2.8) itself is thus highly non-trivial on intervals of length T̃/ε^{1+α}.

Remark 2.2.
Whenever the function f is not analytic but only of class C^k w.r.t. its second variable, a weaker result still holds, with an error estimate of size O(ε^{k+1}) in place of the exponentially small bound in (2.8), and where Φ^ε and F^ε are now only k-times differentiable.
Remark 2.3. The so-called averaged vector field F^ε has an expansion in powers of ε of the form F^ε(u) = F_1(u) + εF_2(u) + ..., whose terms are built up from f and its derivatives: for instance, the first term is the average

    F_1(u) = (1/P) ∫_0^P f(τ, u) dτ,

while F_2 is built from Lie brackets [f(σ, ·), f(τ, ·)] of the field evaluated at different times, where [·, ·] denotes the usual Lie bracket of smooth functions. The next terms in the expansion can also be explicitly written down, using either the methodology of [10] or the formal nonlinear Floquet-Magnus expansion exposed in [6]. Nevertheless, they require the resolution of complicated recurrence formulas.

2.2.
Geometric aspects of stroboscopic averaging. Assume now that X is a Hilbert space (with scalar product (·,·)_X) and that it is densely and continuously embedded in some ambient Hilbert space Z (with real scalar product (·,·)_Z). The vector field f is said to be Hamiltonian if there exist a bounded map J ∈ GL(X), skew-symmetric w.r.t. (·,·)_Z, and an analytic function H : T × X → R such that

    f(t, u) = J^{-1} ∇H(t, u),

where ∇ denotes the gradient w.r.t. u for the scalar product (·,·)_Z. Accordingly, we say that Φ^ε(t, ·) is symplectic if, for all u, v, w,

    (J ∂_u Φ^ε(t, u) v, ∂_u Φ^ε(t, u) w)_Z = (J v, w)_Z.

Finally, a smooth function I^ε : T × X → R (possibly depending on ε) is said to be an invariant of (1.1) if it satisfies

    ∂_t I^ε(t, u) + ε ∂_u I^ε(t, u) f(t, u) = 0,   (2.10)

which implies that, for all t, I^ε(t, u^ε(t)) ≡ I^ε(0, u_0).
A number of important properties of stroboscopic averaging stem from the fact that both the change of variable Φ^ε and the averaged vector field F^ε inherit the intrinsic properties of the system. In particular (see [10, 13]):
• If the original equation (1.1) is Hamiltonian, then Φ^ε is symplectic and F^ε is Hamiltonian.
• If (1.1) is divergence-free, then Φ^ε is volume-preserving and F^ε is divergence-free.
• If I^ε : T × X → C is an invariant of (1.1), then I^ε(0, ·) : X → C is preserved by Φ^ε and is an invariant of F^ε.
The second point is important in applications to kinetic equations, for instance, although it is not relevant to our application to the Schrödinger equation, for which it appears as a mere consequence of its Hamiltonian character. Hence, it will not be discussed further here.
In order to later analyze the numerical experiments on a scientifically sound ground, we quote the following results from [10], also proved in [13] in the finite-dimensional context.

Theorem 2.2. [Stroboscopic averaging preserves the Hamiltonian structure] Under the assumptions of Theorem 2.1, suppose that f is Hamiltonian with Hamiltonian H. Then Φ^ε and F^ε are respectively symplectic and Hamiltonian up to exponentially small perturbation terms: namely, there exists an analytic function H^ε : X → R such that, on K,

    ‖F^ε(u) − J^{-1} ∇H^ε(u)‖_X ≤ e^{−C/ε}.

Remark 2.4. The Hamiltonian H^ε of the averaged equation, similarly to F^ε, also has an expansion in ε of the form H^ε(u) = H_1(u) + εH_2(u) + ..., whose terms are built up from H and its derivatives; for instance, the first term is the average H_1(u) = (1/P) ∫_0^P H(τ, u) dτ.

Theorem 2.3. [Stroboscopic averaging preserves invariants]
Under the assumptions of Theorem 2.1, suppose that the function I^ε : T × X → R is an invariant of the field f, i.e. satisfies (2.10) for any (t, u) ∈ T × K, and is analytic w.r.t. its second variable. Then Φ^ε and F^ε satisfy, for 0 < ε < ε_1 and (t, u) ∈ T × K,

    |I^ε(0, Φ^ε(t, u)) − I^ε(t, u)| ≤ C e^{−C/ε}   and   |∂_u I^ε(0, u) F^ε(u)| ≤ C e^{−C/ε},   (2.13)

for some positive constant C. In particular, along the flow Ψ^ε of the averaged equation (2.7), we have

    |I^ε(0, Ψ^ε(t, u_0)) − I^ε(0, u_0)| ≤ C ε t e^{−C/ε},   0 ≤ t ≤ T/ε.   (2.14)

Remark 2.5. By using an iterative argument (see for instance Theorem 8.1 in [19]), it is easy to deduce from (2.14) that the Hamiltonian H is preserved, up to small errors and with no drift, along the averaged solution over exponentially long times, i.e. for times t of size e^{C/(2ε)}. This statement will be illustrated in the numerical experiments of Section 4.2.
Remark 2.6. Both previous theorems still hold whenever the functions considered (i.e. f and I^ε) are only of class C^k. In this case, the e^{−C/ε}-term has to be replaced by ε^{k+1}.
An important particular case arising in practice concerns the following autonomous semi-linear differential equation (a class to which the Schrödinger equation belongs)

    (d/dt) w(t) = J^{-1} A w(t) + ε g(w(t)),   (2.15)

where A is a linear (possibly unbounded) self-adjoint operator such that e^{tJ^{-1}A} ∈ L(X, X) is P-periodic w.r.t. t. It is then straightforward to recast this system in the format of (1.1) by performing the preliminary change of variable u(t) = e^{−tJ^{-1}A} w(t) (this change of variable is distinct from Φ^ε, and the resulting equation is likewise distinct from (2.7)).

2.3. SAM, a numerical counterpart of stroboscopic averaging. The differential equation (2.7) being non-stiff, it makes good sense to try to approximate it rather than to solve the original stiff problem (1.1). However, a general-purpose numerical method cannot rely on the analytical computation of such terms as those involved in F^ε, and this rules out the direct resolution of equation (2.7). In the sequel, we rather solve it by approximating F^ε "on the fly", which is the idea at the core of SAM and very much in the spirit of Heterogeneous Multiscale Methods (see [14, 15, 16, 17]). In order to obtain an approximation of F^ε(u) at a given point u ∈ X, we first use the group property of Ψ^ε to assert that

    ε F^ε(u) = (d/dt) Ψ^ε(t, u) |_{t=0},

and then approximate F^ε through an interpolation of the derivative of Ψ^ε(t, u), at order 2 by

    ε F^ε(u) ≈ [Ψ^ε(P, u) − Ψ^ε(−P, u)] / (2P),   (2.17)

or at order 4 by

    ε F^ε(u) ≈ [−Ψ^ε(2P, u) + 8 Ψ^ε(P, u) − 8 Ψ^ε(−P, u) + Ψ^ε(−2P, u)] / (12P).   (2.18)

To complete the procedure, it remains to use the fact that Φ^ε(P, Ψ^ε(P, u)) = Ψ^ε(P, u) and Φ^ε(−P, Ψ^ε(−P, u)) = Ψ^ε(−P, u), a consequence of the stroboscopic property Φ^ε(kP, v) = v for all k ∈ Z: the values Ψ^ε(±P, u) can thus be obtained by solving the original equation (1.1) itself. It is worth insisting at this stage that the computation of F^ε at the point u is regarded as asynchronous in the terminology of Heterogeneous Multiscale Methods. This means here that F^ε(u) necessitates the computation of Ψ^ε(P, u) and Ψ^ε(−P, u), irrespective of the time t at which we compute the approximation of v^ε(t).
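The finite-difference recovery of εF^ε from micro-integrations is easy to sketch in code. The following Python fragment is a toy illustration of ours (the scalar field f(t, u) = sin(t)² u, the RK4 micro-integrator and all numerical values are our own choices, not the paper's): it applies the second-order formula (2.17), and the result should agree with the leading term εF_1(u) = εu/2 up to O(ε²).

```python
import numpy as np

# Toy highly-oscillatory problem  du/dt = eps * f(t, u)  with f 2*pi-periodic:
# here f(t, u) = sin(t)^2 * u, so the averaged field is F1(u) = u/2 + O(eps).
P, eps = 2 * np.pi, 1e-3
n = 64                                  # micro-steps per period, h = P / n

def f(t, u):
    return np.sin(t) ** 2 * u

def micro_flow(u, sign):
    """One period of RK4 micro-integration of du/dt = eps*f(t,u), forward
    (sign=+1) or backward (sign=-1) in time: approximates Psi(+-P, u)."""
    h = sign * P / n
    t = 0.0
    for _ in range(n):
        k1 = eps * f(t, u)
        k2 = eps * f(t + h / 2, u + h / 2 * k1)
        k3 = eps * f(t + h / 2, u + h / 2 * k2)
        k4 = eps * f(t + h, u + h * k3)
        u += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

def eps_F(u):
    # second-order central difference (2.17):
    # eps*F(u) ~ (Psi(P, u) - Psi(-P, u)) / (2P)
    return (micro_flow(u, +1) - micro_flow(u, -1)) / (2 * P)

u0 = 1.0
print(eps_F(u0), eps * u0 / 2)   # finite difference vs leading term eps*F1(u0)
```

Note that eps_F never uses the averaged field analytically: it only calls the micro-integrator on the original oscillatory equation, which is the asynchronous evaluation described above.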
We finally obtain a numerical method by approximating Φ^ε(P, Ψ^ε(P, u)) = Ψ^ε(P, u) and Φ^ε(−P, Ψ^ε(−P, u)) = Ψ^ε(−P, u) through the numerical solution of the equation

    (d/dt) U(t) = ε f(t, U(t)),   U(0) = u,

forward and backward in time, by a standard one-step method S^ε_h (hereafter referred to as the micro-integrator), where the step size h used is small enough to resolve one oscillation, i.e. h = P/n with n ∈ N. The outcome of this procedure is the following micro-macro algorithm:

SAM Algorithm
1. Choose a micro-step h = P/n and a macro-step H > 0 and set N = 0.
2. Advance the solution through a standard explicit Runge-Kutta method (hereafter referred to as the macro-integrator) with coefficients (a_ij, b_j):

    U_i = u_N + H Σ_{j<i} a_ij F^ε_h(U_j),   u_{N+1} = u_N + H Σ_j b_j F^ε_h(U_j),   (2.21)

where either

    ε F^ε_h(u) = [(S^ε_h)^n(u) − (S^ε_{−h})^n(u)] / (2P)   (2.22)

for second-order interpolation, or

    ε F^ε_h(u) = [−(S^ε_h)^{2n}(u) + 8 (S^ε_h)^n(u) − 8 (S^ε_{−h})^n(u) + (S^ε_{−h})^{2n}(u)] / (12P)   (2.23)

for fourth-order interpolation.
3. Set N := N + 1 and go to Step 2 until N H ≥ T/ε.
Note that the algorithm computes a sequence of approximations u_N, at times t_N = N H, of the averaged solution of (2.7). For values of t_N that coincide with integer multiples of the period P, this actually provides an approximation of u^ε(t_N), since u^ε(t_N) and Ψ^ε(t_N, u_0) then coincide. If one needs the solution at intermediate points, it can also be obtained through a kind of post-processing. If t_N = kP + Δt with k ∈ N and 0 < Δt < P, set ũ^ε_{kP} = Ψ^ε(−Δt, u_N) ≈ Ψ^ε(kP, u_0); one then has the relation

    u^ε(t_N) = Φ^ε(t_N, Ψ^ε(t_N, u_0)) ≈ Φ^ε(Δt, Ψ^ε(Δt, ũ^ε_{kP})),

where we have used, with Δt = t_N − kP, that, on the one hand, Φ^ε(t_N, ·) = Φ^ε(Δt, ·) owing to the periodicity of Φ^ε w.r.t. t and, on the other hand, Ψ^ε(t_N, u_0) = Ψ^ε(Δt, Ψ^ε(kP, u_0)) owing to the group property of Ψ^ε. Computing an approximation of u^ε(t_N) thus boils down to, first, approximating ũ^ε_{kP} ≈ Ψ^ε(−Δt, u_N) by numerically solving equation (2.7) backward in time from 0 to −Δt and, second, approximating Φ^ε(Δt, Ψ^ε(Δt, ũ^ε_{kP})) by numerically solving equation (1.1) from 0 to Δt with initial value u_0 replaced by ũ^ε_{kP}. These are only local-in-time operations, which can be done independently of the main SAM algorithm and whose contribution to the global error of approximation is not significant. In the following, we will not elaborate on this aspect of the method, since only stroboscopic times will be used in the numerical simulations.
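Putting the pieces together, the whole micro-macro loop fits in a short script. The sketch below is again a toy illustration under our own choices (the same scalar model problem, RK4 as both micro- and macro-integrator, and the second-order difference for the vector field; none of the parameter values are those of the paper). At the stroboscopic final time the SAM output can be compared with the known exact solution.

```python
import numpy as np

# Full SAM micro-macro loop on the toy problem du/dt = eps*sin(t)^2*u (P = 2*pi):
# RK4 micro-integrator over one period, second-order difference for eps*F,
# RK4 macro-integrator with macro-step H = O(1/eps). All values illustrative.
P, eps, n = 2 * np.pi, 1e-3, 64

def f(t, u):
    return np.sin(t) ** 2 * u

def micro_flow(u, sign):
    h = sign * P / n
    t = 0.0
    for _ in range(n):                        # RK4 on du/dt = eps*f(t,u)
        k1 = eps * f(t, u)
        k2 = eps * f(t + h / 2, u + h / 2 * k1)
        k3 = eps * f(t + h / 2, u + h / 2 * k2)
        k4 = eps * f(t + h, u + h * k3)
        u += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

def eps_F(u):
    # formula (2.22), with the exact flows replaced by micro-integrations
    return (micro_flow(u, +1) - micro_flow(u, -1)) / (2 * P)

T_end = 1000 * P                               # stroboscopic final time, ~1/eps in size
N_macro = 50
H = T_end / N_macro                            # macro-step H = O(1/eps)
u = 1.0
for _ in range(N_macro):                       # macro RK4 on the averaged equation
    k1 = eps_F(u)
    k2 = eps_F(u + H / 2 * k1)
    k3 = eps_F(u + H / 2 * k2)
    k4 = eps_F(u + H * k3)
    u += H / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# for this toy problem, u(t) = exp(eps*(t/2 - sin(2t)/4)), so at t = 1000*P
# the exact stroboscopic value is exp(eps*T_end/2)
exact = np.exp(eps * T_end / 2)
print(u, exact)
```

The macro loop takes only 50 steps over 1000 fast periods, each step costing two micro-integrated periods per stage: this is the ε-independent cost structure described above.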
3. Numerical properties of SAM.
3.1. Formal error analysis. In this section, we present an error analysis of SAM. We content ourselves with a descriptive analysis, since a more mathematically rigorous derivation of error estimates would unnecessarily burden the presentation while bringing no surprise: standard arguments for non-stiff equations indeed apply, given that SAM only requires solving numerically the averaged equation (which is by construction non-stiff) and the original highly-oscillatory equation with a step size h much smaller than a period (thus bringing the equation back into a non-stiff regime). As explained in [8], there are three sources of errors:
1. The approximation of εF^ε by a finite-difference formula (2.17) or (2.18): the corresponding error is expressed in terms of ε.
2. The substitution of Ψ^ε(±ℓP, u), ℓ = 1, 2, in F^ε by (S^ε_{±h})^{ℓn}(u) in F^ε_h (see (2.22) or (2.23)): the corresponding error is expressed in terms of the micro-step h and of ε.
3. The discretization error due to the macro-integrator (2.21): the corresponding error is expressed in terms of the macro-step H.
The first source of error contributes an O(ε^{δ+1}) term, where δ is the order of the difference formula (either 2 or 4 in our experiments). The second source of error contributes an O(ε^ν h^p) term, since one solves a non-stiff equation with vector field εf and step size h over a bounded interval. The extra factor ε^ν with ν = 1 accounts for the fact that the micro-integrator is exact for ε = 0; it will turn out that ν = 2 is the effective value observed. These errors are magnified by a factor 1/ε through the stable macro-integration over an interval of length T/ε and lead to an error in u_N of size O(ε^δ + ε^{ν−1} h^p). The third source of error accounts for an error term of the form O((εH)^q), where q is the order of the macro-integrator; we recall that εH has the form T/N. Combined together, these three sources of error lead to an error of size

    error = O(ε^δ + ε^{ν−1} h^p + (εH)^q).   (3.25)

3.2. Geometric behavior. Although the asymptotic averaged field in (2.7) inherits the geometric properties of the original system (1.1), there is no guarantee that its numerical implementation will also do so. When applied to a Hamiltonian equation such as the Schrödinger equation, the averaged vector field is also Hamiltonian, and it may seem desirable for its numerical counterpart to be so. Unfortunately, even if (S_h)^n is a symplectic map, the finite-difference approximation is not a geometric transformation, and our implementation of SAM is not symplectic. However, time-reversibility of the system (1.1), whenever it holds, is preserved, provided the micro-integrator is itself a symmetric method (this will be the case of the splitting methods used in our experiments). Since, in addition, F^ε_h is computed through symmetric formulas, it is also the vector field of a time-reversible equation.
This property ensures a favorable numerical behavior, as documented for ordinary differential equations (see for instance [19]): properties such as the preservation of the Hamiltonian or the persistence of KAM tori are indeed transferred through a symmetric discretization. It is thus in principle not difficult to design a symmetric SAM by choosing a symmetric macro-integrator.
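Such a symmetric macro-step is straightforward to implement. The sketch below is our own minimal illustration, not the paper's code: a generic implicit midpoint step solved by fixed-point iteration, demonstrated on a linear stand-in F(u) = εu/2 for the averaged field. The fixed-point iteration converges here because the effective Lipschitz constant H·∂F = O(εH) is small.

```python
import numpy as np

def midpoint_step(F, u, H, tol=1e-14, max_iter=100):
    """Implicit midpoint step  u_{n+1} = u_n + H*F((u_n + u_{n+1})/2),
    solved by fixed-point iteration (contractive since H*||dF|| << 1)."""
    v = u + H * F(u)                    # explicit Euler predictor
    for _ in range(max_iter):
        v_new = u + H * F(0.5 * (u + v))
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# demo on a linear stand-in for the averaged field: F(u) = 0.5*eps*u
eps, H = 1e-3, 100.0                    # macro-step H = O(1/eps), illustrative
F = lambda u: 0.5 * eps * u
u = np.array([1.0])
for _ in range(20):                     # integrate up to t = 2000
    u = midpoint_step(F, u, H)
print(u[0], np.exp(0.5 * eps * 2000.0))
```

Being symmetric (exchanging u_n and u_{n+1} while flipping the sign of H leaves the step unchanged), this macro-integrator is the one used below for the long-time experiments.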
In the sequel, we use two kinds of macro-integrators. For simulations on time intervals of order 1/ε, our aim is to keep SAM as efficient as possible for small values of the parameter ε (i.e. in the very highly-oscillatory regime), for which SAM is expected to outperform existing methods; we thus use non-symmetric but accurate macro-integrators such as the fourth-order Runge-Kutta method (RK4). On such time intervals, the non-stiff character of equation (2.7) is enough for the numerical solution to exhibit essentially no drift in the energy and the mass. On longer times though (such as 1/ε^β with β ≥ 2), we use a (second-order) symmetric method for the macro-integration, which exhibits a much better preservation of the geometric invariants.

4. Numerical assessment tests. The goal of this section is to confirm by numerical tests the error analysis sketched in Subsection 3.1 and, in addition, to observe the geometric properties of SAM for long-time integration. We apply SAM to nonlinear Schrödinger equations for which the linear evolution induced by the Laplacian is periodic in time, see [10]. We also compare the efficiency of SAM to that of the time-splitting method.
We consider cubic nonlinear Schrödinger equations of the form

    i ∂_t ψ^ε(t, x) = A ψ^ε(t, x) + ε α(x) |ψ^ε(t, x)|² ψ^ε(t, x),   x ∈ Ω,   (4.26)

in the two following situations:
(i) (NLS on the torus) Ω = T_{2π} = [0, 2π], A = −∂²_x with periodic boundary conditions and α(x) = 2 cos(2x); this problem was studied in [18]. We will take initial data which belong to the domain of A.
(ii) (Gross-Pitaevskii) Ω = R, A = −(1/2)∂²_x + (1/2)(x² − 1) and α(x) ≡ 1. Here also, we will choose initial data in the domain of A.
In both cases, the pivot space introduced in Subsection 2.2 is Z = L²(Ω). Moreover, the operator A has compact resolvent and its spectrum is Z (NLS on the torus) or N (Gross-Pitaevskii). Hence, (4.26) is of the form of the autonomous equation (2.15), and the operator e^{−itA} ∈ L(X, X) is 2π-periodic in time.
4.1. NLS on the torus: Accuracy and efficiency of SAM. We present here our numerical results in case (i). As micro-integrator, we adopt a time-splitting spectral method, which has been widely and successfully used in many applications [1, 2]. More precisely, we choose the fourth-order time-splitting Fourier spectral method (TSFP4) introduced in [25]. This method is based on the combination of a Fourier discretization in space and a reversible time-splitting. In practice, the wavefunction is discretized in space by the Fourier series

    ψ(x) = Σ_{k = −N_x/2+1}^{N_x/2} ψ̂_k e^{ikx},

with N_x = 256, so that the errors originating from the space discretization can be considered as negligible. The time-step for the micro-integrator is denoted by h and is always taken of the form 2π/N, where N ∈ N*. This ensures super-convergence, i.e. (3.25) holds with the exponent ν = 2, as proven in [11].
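For concreteness, the elementary building block of such splitting integrators can be sketched in a few lines. This is a minimal second-order Strang splitting step (TSFP2), not the fourth-order composition TSFP4 of [25], whose coefficients we do not reproduce here; the grid size, time-step, value of ε and initial datum below are illustrative choices of ours.

```python
import numpy as np

# Strang splitting (TSFP2) for  i d(psi)/dt = -psi_xx + eps*alpha(x)*|psi|^2*psi
# on [0, 2*pi] with periodic BCs; Fourier pseudospectral in space.
Nx = 256
x = 2 * np.pi * np.arange(Nx) / Nx
k = np.fft.fftfreq(Nx, d=1.0 / Nx)         # integer wave numbers
alpha = 2 * np.cos(2 * x)                  # the potential alpha(x) of case (i)
eps = 2.0 ** -5

def strang_step(psi, h):
    # half-step of the linear part: multiply by exp(-i*h/2*k^2) in Fourier space
    psi = np.fft.ifft(np.exp(-1j * (h / 2) * k ** 2) * np.fft.fft(psi))
    # full step of the nonlinear part: |psi| is pointwise invariant, so this
    # substep is an exact phase rotation
    psi = psi * np.exp(-1j * h * eps * alpha * np.abs(psi) ** 2)
    # second linear half-step
    return np.fft.ifft(np.exp(-1j * (h / 2) * k ** 2) * np.fft.fft(psi))

psi = np.exp(1j * x) + 0.5 * np.exp(-2j * x)   # hypothetical smooth initial datum
h = 2 * np.pi / 512
mass0 = np.sum(np.abs(psi) ** 2) * (2 * np.pi / Nx)
for _ in range(512):                            # integrate over one period [0, 2*pi]
    psi = strang_step(psi, h)
mass1 = np.sum(np.abs(psi) ** 2) * (2 * np.pi / Nx)
print(mass0, mass1)
```

Both substeps are unitary on the Fourier grid, so the discrete mass is preserved to round-off; this is the splitting counterpart of the invariance discussed in Subsection 3.2.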
As macro-integrator, we use the RK4 scheme with time-step H = T_0/(N ε), and we approximate the vector field F^ε_h by the fourth-order interpolation formula (2.23). From (3.25), we thus expect an asymptotic error estimate of the form

    error_SAM ≈ C (ε⁴ + ε h⁴ + (εH)⁴).   (4.29)

The final time of the simulation is taken as T_0/ε with T_0 = π/4, while numerical errors are defined as the discrete ℓ² norm of the difference between the numerical solution ψ_n and the reference solution. Finally, we take the value Δx = 2π/N_x for the mesh size. For ε = 2^{−9}, 2^{−10} (and smaller ε), the reference solution is obtained by SAM with (εH, h) = (π/2^{12}, π/2^{12}), using a higher-order (8th-order) interpolation formula for the vector field. Our first results concern the accuracy of SAM and confirm estimate (4.29). On the left of Fig. 1, we present the error versus the macro-step, for H = π/(2^j ε), j = 5, 6, ..., 11, with the small fixed micro-step h = π/2^{12}. The different curves correspond to the values ε = 2^{−5}, ..., 2^{−10}. On the right of Fig. 1, we represent the error versus the micro-step, for h = π/2^5, ..., π/2^{11}, with the small fixed macro-step H = π/(2^{12} ε). Again, the different curves correspond to the values ε = 2^{−5}, ..., 2^{−10}. Fourth-order accuracy in εH (with a uniform constant w.r.t. ε) and in h (with a constant linear in ε) can be clearly observed. On each curve, a saturation appears, due to the interpolation error; as expected, the level of this saturation error is proportional to ε⁴. Next, we investigate numerically the accuracy of the TSFP4 method. Table 1 lists the errors obtained by TSFP4 with time-step h for different values of ε. We obtain an error

    error_TSFP4 ≈ C ε h⁴,   (4.30)

as expected from [11], since the micro time-step is a submultiple of the period 2π.
Thirdly, we present efficiency diagrams. In Figure 2, we represent the error versus the total number N_step of micro TSFP4 steps, for SAM and for TSFP4. Here, the red curves with the 'S' label plot the errors of SAM, and the dashed blue curves with the 'T' label plot the errors of TSFP4. The error of SAM for a fixed number of TSFP4 steps is chosen as the minimal one among all possible choices (H_j, h_k), i.e. H_j = (π/4)/(2^j ε), h_k = (π/4)/2^k with j + k fixed. Let us estimate from (4.29) and (4.30) the theoretical behavior of the error versus N_step for both methods. For SAM, we have N_step = C_0/(h H ε), so that, after a simple optimization on (4.29), one obtains an optimal error (assuming that the best choice of (H, h) is made) of the form

    error_SAM ≈ C_1 √ε / N_step².

For TSFP4, the number of micro-steps is N_step = C_5/(ε h). As a consequence, one gets from (4.30)

    error_TSFP4 ≈ C_2 / (ε³ N_step⁴).

On the curves in Figure 2, we observe the expected behaviors. The error for SAM is proportional to 1/N_step² and improves proportionally to √ε when ε decreases, whereas the error for the TSFP4 method is proportional to 1/N_step⁴ but is degraded as ε^{−3} when ε decreases. We can conclude that TSFP4 performs better than SAM when ε is relatively large, e.g. ε > 2^{−12}, and that SAM tends to perform better for smaller values of ε. The most striking result concerns the case ε = 2^{−18}, where the speed-up factor is 100.
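The optimization invoked above is a short balancing argument, which we spell out here under the stated scalings (all constants absorbed into C_0):

```latex
\min_{h,\,H}\ \varepsilon h^4 + (\varepsilon H)^4
\quad\text{subject to}\quad
N_{\mathrm{step}} = \frac{C_0}{h\,H\,\varepsilon}.
\]
\[
\text{Setting } y = \varepsilon H \text{ (so that } h\,y = C_0/N_{\mathrm{step}}\text{)
and balancing the two terms, } \varepsilon h^4 = y^4,
\text{ gives } y = \varepsilon^{1/4} h,
\]
\[
h^2 = \frac{C_0}{N_{\mathrm{step}}\,\varepsilon^{1/4}},
\qquad
\mathrm{error}_{\mathrm{SAM}} \sim \varepsilon h^4
= \frac{C_0^{2}\,\sqrt{\varepsilon}}{N_{\mathrm{step}}^{2}}.
```

This is precisely the √ε/N_step² behavior observed on the 'S' curves of Figure 2; the same substitution h = C_5/(ε N_step) in (4.30) yields the 1/(ε³ N_step⁴) law for TSFP4.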

4.2. NLS on the torus: Long-time behavior.
In this subsection, we focus on the long-time behavior of SAM and thus consider NLS on the torus (4.27), (4.28) over much longer time intervals, of the form 0 ≤ t ≤ T_0/ε² (see Remark 2.5, where the Hamiltonian is now denoted by E^ε). Here, we are not so much interested in the accuracy of the wavefunction ψ^ε itself, but rather in the preservation of the mass and the energy, two invariants of (4.27), respectively defined by

    m^ε(t) = ∫_Ω |ψ^ε(t, x)|² dx,   E^ε(t) = ∫_Ω ( |∂_x ψ^ε(t, x)|² + (ε/2) α(x) |ψ^ε(t, x)|⁴ ) dx.

The number of points for the space discretization is taken as N_x = 32. For the micro-integrator, we use the second-order Strang splitting scheme (TSFP2) with a micro time-step h = 2π/512, ensuring that the CFL-type condition h (N_x/2)² < 2π is satisfied, so as to avoid the effect of resonances over long times. Finally, we use the second-order formula (2.22) for the interpolation of the vector field. As for the macro-integrators, we experiment with two explicit Runge-Kutta methods, namely RK2 and RK4, and with the implicit midpoint rule. Note that the implicit midpoint rule is a symmetric method of order 2. In our simulations, ε is given the value 2^{−11} ≈ 5 × 10^{−4} and the final integration time is now taken as T_0/ε² with T_0 = π/4. On Figure 3, we plot in logarithmic scale the time evolution of the errors on the mass (left) and on the energy (right), for the three macro-integrators considered. The macro time-step is H = π/(2⁷ ε). For the midpoint scheme, as expected, we observe a very good conservation of the mass (error smaller than 2 × 10^{−11}) and of the energy (error smaller than 10^{−9}), with no drift over this very long time interval. In contrast, the two non-symmetric schemes RK2 and RK4 display a linear drift in time: at the end of the simulation, the errors on the mass and the energy are of order 10^{−2} for RK2 and of order 10^{−7} for RK4. These numerical results corroborate the statements of Subsection 3.2. Besides, we investigate the longer-time performance of SAM with ε = 2^{−8}, the final integration time being now taken as T_0/ε³.
In Figure 4, we plot in logarithmic scale the time evolution of the errors on the mass and the energy for SAM implemented with the implicit midpoint scheme as macro-integrator, the Strang splitting method as micro-integrator and the fourth-order interpolation of the vector field. The macro time-step is H = π/(2⁷ ε).

4.3. Gross-Pitaevskii equation. We now present our numerical results in case (ii), namely NLS with a harmonic oscillator on the whole space,

    i ∂_t ψ^ε(t, x) = −(1/2) ∂²_x ψ^ε(t, x) + (1/2)(x² − 1) ψ^ε(t, x) + ε |ψ^ε(t, x)|² ψ^ε(t, x),   x ∈ R.   (4.31)

The micro-integrator is the fourth-order time-splitting Hermite spectral method (TSHP4) [3, 24]. The wavefunction is discretized in space by the Hermite series

    ψ(x) = Σ_{k=0}^{N} ψ̂_k h_k(x),

where h_k (k ∈ N), given explicitly in [22], is the (k+1)-th eigenfunction of the harmonic oscillator L = −(1/2)∂_{xx} + (1/2)(x² − 1) and is such that L h_k(x) = k h_k(x). In practice, we take N = 79. The macro-integrator for SAM is the RK4 scheme, and the fourth-order interpolation formula is used for the approximation of the vector field. The initial datum is ψ_0(x) = h_0(x) + h_1(x) (4.32), and the final integration time is T_0/ε with T_0 = 2π. For ε = 2^{−3}, ..., 2^{−7}, the reference solution ψ_ref is obtained by the TSHP4 method with a time-step h = π/10³, while for ε = 2^{−8}, 2^{−9}, 2^{−10} (and smaller ε), the reference solution is obtained by SAM with H = π/(2^{10} ε), h = π/2^{10}, using 8th-order interpolation. As above, numerical errors are computed as the discrete ℓ² norm of the difference between the numerical solution ψ_n and the reference solution,

    error = ( Σ_j ω_j |ψ_n(x_j) − ψ_ref(x_j)|² )^{1/2},

where the coefficients ω_j are the rescaled Gauss-Hermite quadrature weights [22]. In Figure 5, we present accuracy diagrams of the error versus the macro-step H (for macro-steps H = π/(2^j ε), j = 3, 4, ..., 11, with fixed micro-step h = π/2^{12}), and of the error versus the micro-step h = π/2^5, ..., π/2^9 (with fixed macro-step H = π/(2^{12} ε)). The values ε = 2^{−5}, ..., 2^{−9} are tested. As in the case of NLS on the torus, these curves corroborate the error estimate (4.29).
In Table 2, we list the errors of the TSHP4 method as functions of ε and the micro-step h. Again, our results confirm the estimate

    error_TSHP4 ≈ C_4 ε h⁴.   (4.33)

5. Qualitative illustration of energy transfers. In this section, we illustrate how SAM can be used to explore the qualitative behavior of highly-oscillatory systems. In the context of nonlinear Schrödinger equations of the form (4.26), the dynamics of the coefficients (or modes) of the solution -when expanded on the basis of eigenfunctions of A- exhibits interesting phenomena: the nonlinearity in (4.26) indeed induces coupling effects and energy transfers between the modes, which we wish to visualize. As a first example, the 1D NLS equation on the torus studied in Subsection 4.1 was already simulated in [10] by SAM and other averaging techniques. Here, we simulate three other models: the Gross-Pitaevskii equation in one dimension and two models in two dimensions derived in the context of anisotropic confinement. Let us briefly describe the phenomenology. The essential assumption here is that the unbounded operator A has a compact resolvent and that the linear group e^{−itA} is periodic in time. We recall that, if ψ^ε solves (4.26), then u^ε = e^{itA} ψ^ε satisfies the filtered equation

    ∂_t u^ε(t) = −i ε e^{itA} ( α |e^{−itA} u^ε(t)|² e^{−itA} u^ε(t) ).

This equation is of the form (1.1) and, up to exponentially small remainder terms, we have u^ε(t) = Φ^ε(t, Ψ^ε(t, ψ_0)), see [10]. Recall that the change of variable u ↦ Φ^ε(t, u) is P-periodic in time and satisfies Φ^ε(kP, u) = u for all k ∈ N. Over the typical long time T/ε of a simulation, it accounts for the high oscillations that one may wish to avoid. In this situation, it is more appropriate to represent only the function Ψ^ε(t, ψ_0) which, as the solution to the autonomous equation (2.7), is slowly varying and gives access to the long-time nonlinear dynamics. SAM was precisely designed with this objective.
After projection on the eigenmodes of A (in practice, Fourier or Hermite modes), (4.26) takes the equivalent form of an infinite system of coupled ODEs. Denote respectively by (λ_k)_{k∈I} and (e_k(x))_{k∈I}, where I is the set of indices (Z or N in applications), the eigenvalues and eigenfunctions of A. If ψ^ε = Σ_{k∈I} ψ_k(t) e_k(x) solves (4.26), then each mode ψ_k satisfies a coupled ODE. Introducing the unknown ξ_k(t) = e^{itλ_k} ψ_k(t), we obtain the filtered system. In the sequel, we plot and compare the evolution of the moduli |ψ_k(t)| = |ξ_k(t)| of the modes obtained by simulating three models:
• (SAM) Here, the macro solver is RK4, the micro solver is TSFP4/TSHP4 and the fourth-order interpolation formula is used for the approximation of the vector field. We recall that, up to numerical errors, SAM and TSFP4/TSHP4 coincide at the stroboscopic times t = kP, k ∈ N.
• (FAM) The first-order averaged model obtained by replacing F^ε in (2.7) by the first term of its expansion F^ε(u) = F_1(u) + εF_2(u) + ..., see Remark 2.3. This model, together with the corresponding system on the modes ξ_k, is given by (5.37). The numerical method for the approximation of FAM is constructed directly from (5.37): this ODE is discretized by the RK4 scheme, and the integral defining the vector field is computed by the rectangle quadrature formula (which is spectrally convergent since the integrand is P-periodic). Each such integral is discretized with 64 points.
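The rectangle-rule evaluation of the averaged vector field F_1(u) = (1/P) ∫_0^P e^{isA} g(e^{-isA} u) ds can be sketched in Fourier variables for a cubic NLS on the torus. The setting below is a hypothetical stand-in: we take A = -(1/2)∂_xx with eigenvalues k^2/2 (so that e^{-itA} is 4π-periodic), a defocusing cubic nonlinearity, and our own function names; only the quadrature structure mirrors the text.

```python
import numpy as np

N = 32                                # number of Fourier modes
k = np.fft.fftfreq(N, d=1.0/N)        # integer wavenumbers
lam = 0.5 * k**2                      # eigenvalues of A = -(1/2) d_xx on the torus
P = 4 * np.pi                         # period of exp(-itA): lam * P lies in 2*pi*Z

def g(psi):
    """Cubic NLS nonlinearity in physical space (defocusing sign assumed)."""
    return -1j * np.abs(psi)**2 * psi

def F1(u_hat, n_quad=64):
    """First averaged field F1(u) = (1/P) int_0^P e^{isA} g(e^{-isA} u) ds,
    computed by the rectangle rule with n_quad points -- spectrally accurate
    since the integrand is P-periodic in s."""
    acc = np.zeros_like(u_hat)
    for j in range(n_quad):
        s = j * P / n_quad
        v_hat = np.exp(-1j * s * lam) * u_hat        # e^{-isA} u, Fourier side
        w_hat = np.fft.fft(g(np.fft.ifft(v_hat)))    # nonlinearity in physical space
        acc += np.exp(1j * s * lam) * w_hat          # back to filtered variables
    return acc / n_quad
```

Plugging F1 into an RK4 loop, as described above for FAM, then integrates the slow equation u' = ε F_1(u) without ever resolving the fast oscillations.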
5.1. Gross-Pitaevskii in dimension one. We consider here the one-dimensional equation (4.31), (4.32) with ε = 10^{-4}. The wave function is approximated by a Hermite pseudospectral series with N_x = 81. Figure 6 plots the evolution of the absolute values of the Hermite coefficients computed by TSHP4, SAM and FAM. The numerical parameters are as follows: the TSHP4 method is applied with h = 2π/10^3, SAM with (H, h) = (2π/(10^4 ε), 2π/400) and FAM with h = 2π/10^4. We observe an interesting energy cascade phenomenon, very similar to the case of NLS on the torus [0, 2π]×[0, 2π] studied in [9] (see also [10] for numerical simulations by SAM). While the modes of index greater than 1 vanish initially, they grow and become significantly large after a characteristic time that depends on the mode. The point is that, as long as a mode is of order O(ε), it is highly oscillatory and its observation by TSHP4 is not convenient on the first diagram (Fig. 6, top left). Instead, as we said in the introduction of this section, SAM cleans up the oscillations associated with the change of variable Φ^ε, and it is much easier to observe the energy cascade on the second diagram (Fig. 6, top right). The curves represented on the third diagram (FAM) are also very smooth, but show a dynamics that is correct only once the modes have reached a value above a threshold O(ε): below this level, the dynamics given by FAM is inaccurate.

5.2. 2D NLS on an anisotropic torus.
We consider the cubic NLS equation on a very thin two-dimensional torus. Here, the dimensionless parameter ε is the ratio between the typical length-scales in the y and x directions. Let us rescale the variables, setting (t, x, y) = (ε^2 t', x', εy') (then omitting the primes for simplicity in the new equation). The space domain [0, 2π] × [0, 2π] is now independent of ε and we obtain equation (5.39). This equation is still of the form (2.15). However, the nonlinearity g is now unbounded in any Sobolev space. In such a case, the exponential estimates proved in [10] are no longer valid. One can nevertheless expect polynomial estimates for smooth initial data, which are enough to give foundations to SAM. As initial datum for (5.39), we take ψ_0(x, y) = 1 + 2cos(x) + 2cos(y). The numerical parameters of our simulations are the following. The wave function is approximated by a Fourier series with N_x = 128 and N_y = 128. The TSFP4 method is applied with a time step h = 2π/500, SAM is applied with a macro step H = 2π/(10^4 ε^2) and a micro step h = 2π/400, and the first averaged model (FAM) is computed with the time step h = 2π/10^4. In Fig. 7, we plot the evolution of the Fourier coefficients |ψ̂_{n,0}| and |ψ̂_{0,n}| with n = 0, 1, ..., 7, obtained by the three different methods, for ε = 0.01 and t ∈ [0, 2π/ε^2].
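As an illustration of the splitting micro-integrator in this anisotropic setting, one Strang splitting step can be sketched as follows. We assume the rescaled equation takes the form i∂_t ψ = -(1/2)∂_yy ψ + ε^2(-(1/2)∂_xx ψ + |ψ|^2 ψ), which is consistent with the ε^2 time rescaling above but is our reconstruction of (5.39), not quoted from it; TSFP4 would be a fourth-order composition of such second-order steps.

```python
import numpy as np

def strang_step(psi, h, eps, Lx=2*np.pi, Ly=2*np.pi):
    """One Strang splitting (second-order TSFP) step for
    i dpsi/dt = -(1/2) d_yy psi + eps^2 * (-(1/2) d_xx psi + |psi|^2 psi)
    on the torus [0,Lx] x [0,Ly]. The kinetic part is solved exactly as a
    Fourier multiplier, the nonlinear part exactly pointwise (|psi| is
    constant along the nonlinear flow)."""
    Nx, Ny = psi.shape
    kx = 2*np.pi * np.fft.fftfreq(Nx, d=Lx/Nx)
    ky = 2*np.pi * np.fft.fftfreq(Ny, d=Ly/Ny)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    sym = 0.5*KY**2 + eps**2 * 0.5*KX**2          # symbol of the kinetic part
    half = np.exp(-1j * (h/2) * sym)
    psi = np.fft.ifft2(half * np.fft.fft2(psi))   # half kinetic step
    psi = np.exp(-1j * h * eps**2 * np.abs(psi)**2) * psi  # full nonlinear step
    psi = np.fft.ifft2(half * np.fft.fft2(psi))   # half kinetic step
    return psi
```

Each substep is unitary (a Fourier multiplier of modulus one and a pointwise phase), so the discrete mass is conserved up to round-off at every step, which is one reason splitting schemes serve as reliable micro-integrators here.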
The following observations can be made. The three diagrams on the left are very similar, indicating that the first averaged model FAM (which is cheaper) is sufficient to describe correctly the evolution of the modes |ψ̂_{n,0}|. However, from the three diagrams on the right, it can be inferred that FAM is not able to capture the dynamics of the higher modes k_y ≥ 2. Some interesting phenomena appearing on these higher modes in y can only be observed by SAM (in TSFP4, these modes are highly oscillatory): for instance, the modes seem to be arranged in pairs.

5.3. 2D anisotropic Gross-Pitaevskii equation.
We consider here the Gross-Pitaevskii equation with a strongly anisotropic harmonic confinement potential:
i∂_t ψ^ε = -(1/2)Δψ^ε + (1/2)(x^2 + y^2/ε^4) ψ^ε + β|ψ^ε|^2 ψ^ε, 0 < t ≤ T_0, (x, y) ∈ R^2,
ψ^ε(0, x, y) = ψ_0(x, y/ε), (x, y) ∈ R^2,
see [4] for the physical context. In order to rewrite this equation in our framework, the adequate change of variables is again (t, x, y) = (ε^2 t', x', εy'), which yields the model (5.42) on the new unknown (still denoted ψ^ε(t, x, y)), supplemented with the initial condition ψ^ε(0, x, y) = ψ_0(x, y), (x, y) ∈ R^2. Our initial datum is ψ_0(x, y) = h_0(y)(h_0(x) + h_2(x)), where the h_k are the Hermite functions (see Subsection 4.3). The wave function is approximated by a Hermite pseudospectral series with N_x = N_y = 81. In the computations, TSHP4 is applied with a time step h = P/100, SAM is used with (H, h) = (P/(10^3 ε^2), P/100) and FAM is used with the time step h = P/10^3. In Figure 8, we plot the evolution of the Hermite modes for ε = 10^{-2} and β = 5. Comments similar to those made for the 2D NLS on an anisotropic torus apply. For the description of the modes of index 0 in y, FAM is sufficient. Interesting dynamics can be observed on the higher modes in y, and SAM is the most appropriate model for the investigation of these dynamics, since it filters out all the high oscillations.
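For completeness, the Hermite functions h_k and the initial datum ψ_0(x, y) = h_0(y)(h_0(x) + h_2(x)) can be generated by the standard three-term recurrence. This is a sketch: the L^2-normalized convention h_k(x) = H_k(x) e^{-x^2/2} / (2^k k! √π)^{1/2} is the usual one, which we assume matches [22], and the function names are ours.

```python
import numpy as np

def hermite_functions(n_max, x):
    """L2-normalized Hermite functions h_0, ..., h_{n_max} evaluated at the
    points x, via the numerically stable recurrence
    h_{k+1} = x*sqrt(2/(k+1))*h_k - sqrt(k/(k+1))*h_{k-1}."""
    h = np.zeros((n_max + 1, len(x)))
    h[0] = np.pi**(-0.25) * np.exp(-x**2 / 2)
    if n_max >= 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for kk in range(1, n_max):
        h[kk+1] = x * np.sqrt(2.0/(kk+1)) * h[kk] - np.sqrt(kk/(kk+1.0)) * h[kk-1]
    return h

def psi0(x, y):
    """Initial datum psi_0(x, y) = h_0(y) * (h_0(x) + h_2(x)) on a tensor grid."""
    hx = hermite_functions(2, x)
    hy = hermite_functions(2, y)
    return np.outer(hx[0] + hx[2], hy[0])
```

Evaluated at the Gauss-Hermite nodes, these functions are orthonormal with respect to the rescaled quadrature weights, which is exactly what the Hermite pseudospectral discretization exploits.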