June 2019, 1(2): 197-225. doi: 10.3934/fods.2019009

On adaptive estimation for dynamic Bernoulli bandits

Xue Lu, Niall Adams and Nikolas Kantas

Department of Mathematics, Imperial College London, London, SW7 2AZ, UK

* Corresponding author: Nikolas Kantas

Published June 2019

The multi-armed bandit (MAB) problem is a classic example of the exploration-exploitation dilemma. It is concerned with maximising a gambler's total reward by sequentially pulling the arms of a multi-armed slot machine, where each arm is associated with a reward distribution. In static MABs the reward distributions do not change over time, while in dynamic MABs each arm's reward distribution can change and the optimal arm can switch over time. Motivated by the many real applications where rewards are binary, we focus on dynamic Bernoulli bandits. Standard methods like $ \epsilon $-Greedy and Upper Confidence Bound (UCB), which rely on the sample mean estimator, often fail to track changes in the underlying reward in dynamic problems. In this paper, we overcome this slow response to change by deploying adaptive estimation within the standard methods, and propose a new family of algorithms: adaptive versions of $ \epsilon $-Greedy, UCB, and Thompson sampling. These new methods are simple and easy to implement. Moreover, they require no prior knowledge of the dynamic reward process, which is important in real applications. We examine the new algorithms numerically in a range of scenarios, and the results show solid improvements over the standard methods in dynamic environments.
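To make the idea concrete, the sketch below shows how an adaptive-forgetting mean estimator can replace the sample mean inside $ \epsilon $-Greedy. This is a minimal illustration, not the paper's exact algorithm: the class, the gradient recursion, and the default step size $ \eta $ are assumptions in the spirit of the adaptive forgetting factor (AFF) estimators of Anagnostopoulos et al. (2012).

```python
import random

class AFFMean:
    """Forgetting-factor mean whose factor lambda is tuned online by a
    stochastic-gradient step on the one-step squared prediction error."""

    def __init__(self, eta=0.01, lam=0.99):
        self.eta = eta     # gradient step size for lambda (assumed default)
        self.lam = lam     # forgetting factor, kept in [0, 1]
        self.m = 0.0       # discounted sum of observations
        self.w = 0.0       # discounted sum of weights
        self.dm = 0.0      # d m / d lambda, propagated recursively
        self.dw = 0.0      # d w / d lambda, propagated recursively
        self.mean = 0.5    # current estimate (prior guess for a Bernoulli arm)

    def update(self, x):
        # Gradient of the prediction error (mean - x)^2 w.r.t. lambda,
        # using the derivatives of m and w carried from previous steps.
        if self.w > 0:
            dmean = (self.dm * self.w - self.m * self.dw) / (self.w ** 2)
            grad = 2.0 * (self.mean - x) * dmean
            self.lam = min(1.0, max(0.0, self.lam - self.eta * grad))
        # Derivative recursions first (they use the old m and w), then
        # the usual forgetting-factor recursions.
        self.dm = self.m + self.lam * self.dm
        self.dw = self.w + self.lam * self.dw
        self.m = self.lam * self.m + x
        self.w = self.lam * self.w + 1.0
        self.mean = self.m / self.w

def eps_greedy_step(estimators, eps=0.1):
    """Explore uniformly with probability eps, otherwise exploit."""
    if random.random() < eps:
        return random.randrange(len(estimators))
    return max(range(len(estimators)), key=lambda k: estimators[k].mean)

# Example: two Bernoulli arms, rewards fed in as 0/1 observations.
arms = [AFFMean(eta=0.01), AFFMean(eta=0.01)]
k = eps_greedy_step(arms, eps=0.1)
arms[k].update(1.0)  # observed reward for the pulled arm
```

Because stale observations are down-weighted, the estimate reacts to a change in the underlying reward at a speed governed by $ \lambda $, which is itself learned from the data rather than fixed in advance.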

Citation: Xue Lu, Niall Adams, Nikolas Kantas. On adaptive estimation for dynamic Bernoulli bandits. Foundations of Data Science, 2019, 1 (2) : 197-225. doi: 10.3934/fods.2019009

Figure 1.  Illustration of the difference between tuning $ d $ in AFF-$ d $-Greedy and tuning $ \epsilon $ in 'adaptive estimation $ \epsilon $-Greedy'. The step size is $ \eta = 0.01 $
Figure 2.  Performance of different algorithms in the case of a small number of changes
Figure 3.  Abruptly changing scenario (Case 1): examples of $ \mu_{t} $ sampled from the model in (28) with the Case 1 parameters displayed in Table 1
Figure 4.  Abruptly changing scenario (Case 2): examples of $ \mu_{t} $ sampled from the model in (28) with the Case 2 parameters displayed in Table 1
Figure 5.  Results for the two-armed Bernoulli bandit with abruptly changing expected rewards. The top row displays the cumulative regret over time; results are averaged over 100 replications. The bottom row shows boxplots of the total regret at time $ t = 10,000 $. Trajectories are sampled from (28) with the parameters displayed in Table 1
Figure 6.  Drifting scenario (Case 3): examples of $ \mu_t $ simulated from the model in (29) with $ \sigma^{2}_{\mu} = 0.0001 $
Figure 7.  Drifting scenario (Case 4): examples of $ \mu_t $ simulated from the model in (30) with $ \sigma^{2}_{\mu} = 0.001 $
Figure 8.  Results for the two-armed Bernoulli bandit with drifting expected rewards. The top row displays the cumulative regret over time; results are averaged over 100 independent replications. The bottom row shows boxplots of the total regret at time $ t = 10,000 $. Trajectories for Case 3 are sampled from (29) with $ \sigma^{2}_{\mu} = 0.0001 $, and trajectories for Case 4 are sampled from (30) with $ \sigma^{2}_{\mu} = 0.001 $
Figure 9.  Large number of arms: abruptly changing environment (Case 1)
Figure 10.  Large number of arms: abruptly changing environment (Case 2)
Figure 11.  Large number of arms: drifting environment (Case 3)
Figure 12.  Large number of arms: drifting environment (Case 4)
Figure 13.  AFF-$ d $-Greedy algorithm with different $ \eta $ values. $ \eta_{1} = 0.0001, \eta_{2} = 0.001 $, $ \eta_{3} = 0.01 $, and $ \eta_{4}(t) = 0.0001/s^{2}_{t} $, where $ s^{2}_{t} $ is as in (11)
Figure 14.  AFF versions of the UCB algorithm with different $ \eta $ values. $ \eta_{1} = 0.0001 $, $ \eta_{2} = 0.001 $, $ \eta_{3} = 0.01 $, and $ \eta_{4}(t) = 0.0001/s^{2}_{t} $, where $ s^{2}_{t} $ is as in (11)
Figure 15.  AFF versions of the TS algorithm with different $ \eta $ values. $ \eta_{1} = 0.0001 $, $ \eta_{2} = 0.001 $, $ \eta_{3} = 0.01 $, and $ \eta_{4}(t) = 0.0001/s^{2}_{t} $, where $ s^{2}_{t} $ is as in (11)
Figure 16.  D-UCB and SW-UCB algorithms with different values of key parameters
Figure 17.  Boxplots of total regret for the algorithms DTS, AFF-DTS1, and AFF-DTS2. An acronym such as DTS-C5 denotes the DTS algorithm with parameter $ C = 5 $; similarly, AFF-DTS1-C5 denotes the AFF-DTS1 algorithm with initial value $ C_{0} = 5 $. The result of AFF-OTS is plotted as a benchmark
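Figure 16 benchmarks against D-UCB and SW-UCB. For orientation, here is a hedged sketch of discounted UCB in the general form of Garivier and Moulines (2011): all per-arm statistics are discounted by $ \gamma $ at every step, so stale rewards fade from the index. The values of $ \gamma $, $ B $, and $ \xi $ below are illustrative defaults, not the values used in the paper's experiments.

```python
import math

class DiscountedUCB:
    """D-UCB sketch: discounted means plus a padding term, so the index
    can track an optimal arm that switches over time."""

    def __init__(self, n_arms, gamma=0.99, B=1.0, xi=0.5):
        self.gamma, self.B, self.xi = gamma, B, xi
        self.S = [0.0] * n_arms   # discounted reward sums
        self.N = [0.0] * n_arms   # discounted pull counts

    def pick(self):
        # Pull any arm with no (discounted) history first.
        for i, n in enumerate(self.N):
            if n == 0.0:
                return i
        total = sum(self.N)
        return max(
            range(len(self.N)),
            key=lambda i: self.S[i] / self.N[i]
            + 2 * self.B * math.sqrt(self.xi * math.log(total) / self.N[i]),
        )

    def update(self, arm, reward):
        # Discount every arm, then credit the pulled arm.
        for i in range(len(self.N)):
            self.S[i] *= self.gamma
            self.N[i] *= self.gamma
        self.S[arm] += reward
        self.N[arm] += 1.0
```

The key contrast with the AFF approach is that the discount $ \gamma $ (or the window length in SW-UCB) must be chosen in advance, whereas the adaptive forgetting factor is learned online.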
Table 1.  Parameters used in the exponential clock model shown in (28)

                     Case 1                            Case 2
          $ \theta $  $ r_{l} $  $ r_{u} $   $ \theta $  $ r_{l} $  $ r_{u} $
Arm 1       0.001        0.0        1.0        0.001        0.3        1.0
Arm 2       0.010        0.0        1.0        0.010        0.0        0.7
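The exact environment models (28)-(30) are not reproduced on this page, so the following simulator is an assumed reading of the captions: in the abrupt scenarios, each arm's expected reward $ \mu_t $ is taken to jump at exponential-clock events of rate $ \theta $ to a fresh draw from Uniform$ [r_l, r_u] $; in the drifting scenarios it is taken to follow a Gaussian random walk with increment variance $ \sigma^{2}_{\mu} $, truncated to $ [0,1] $. Both forms are illustrative assumptions, not the paper's definitions.

```python
import math
import random

def abrupt_mu_path(theta, r_l, r_u, horizon=10_000, rng=random):
    """Assumed form of the abrupt model: jumps to Uniform[r_l, r_u] at
    exponential-clock events of rate theta."""
    mu, path = rng.uniform(r_l, r_u), []
    p_switch = 1.0 - math.exp(-theta)  # per-step switch probability
    for _ in range(horizon):
        if rng.random() < p_switch:
            mu = rng.uniform(r_l, r_u)
        path.append(mu)
    return path

def drifting_mu_path(sigma2, horizon=10_000, rng=random):
    """Assumed form of the drifting models: Gaussian random walk with
    increment variance sigma2, truncated to [0, 1]."""
    mu, path = rng.uniform(0.0, 1.0), []
    for _ in range(horizon):
        mu = min(1.0, max(0.0, mu + rng.gauss(0.0, math.sqrt(sigma2))))
        path.append(mu)
    return path

# Case 1, Arm 1 of Table 1: rare jumps anywhere in [0, 1].
path = abrupt_mu_path(theta=0.001, r_l=0.0, r_u=1.0)
```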