
Limits and limitations of no-regret learning in games

Published online by Cambridge University Press:  13 October 2017

Barnabé Monnot and Georgios Piliouras
Affiliation: Singapore University of Technology and Design, Engineering Systems & Design Pillar, 8 Somapah Road, Singapore 487372
e-mail: [email protected], [email protected]

Abstract

We study the limit behavior and performance of no-regret dynamics in general game-theoretic settings. We design protocols that achieve both good regret and equilibration guarantees in general games, and we establish a strong equivalence between no-regret dynamics and coarse correlated equilibria (CCE). We then examine structured game settings where stronger properties can be established for no-regret dynamics and CCE. In congestion games where each agent controls a small fraction of the total flow, CCE become closely concentrated around the unique equilibrium flow of the non-atomic game as the individual flow of each agent decreases. Moreover, we compare best/worst-case no-regret learning behavior to best/worst-case Nash equilibrium (NE) in small games. We prove analytical bounds on these inefficiency ratios for 2×2 games and show that they are unbounded for larger games. Experimentally, we sample normal-form games and compute their measures of inefficiency. The resulting ratio distribution has sharp decay, in the sense that most generated games have small ratios. The two ratios also exhibit strong anti-correlation: games with large improvements from the best NE to the best CCE show only small degradation from the worst NE to the worst CCE.
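To make the connection between no-regret learning and CCE concrete, the following is a minimal, illustrative sketch (not the paper's protocol): two players run multiplicative-weights updates, a standard no-regret algorithm, in a 2×2 game, and the time-averaged joint play is checked against the coarse-correlated-equilibrium condition that no player benefits from deviating to a fixed action. The payoff matrix, step size, and horizon are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical 2x2 game (Prisoner's-Dilemma-like payoffs), chosen only for illustration.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])   # row player's payoffs
B = A.T                      # column player's payoffs (symmetric game)

T = 20000                      # number of rounds (illustrative horizon)
eta = np.sqrt(np.log(2) / T)   # rough multiplicative-weights step size
w_row, w_col = np.ones(2), np.ones(2)
joint = np.zeros((2, 2))       # empirical distribution of joint play
rng = np.random.default_rng(0)

for _ in range(T):
    p, q = w_row / w_row.sum(), w_col / w_col.sum()
    i, j = rng.choice(2, p=p), rng.choice(2, p=q)
    joint[i, j] += 1.0 / T
    # Full-information multiplicative-weights update against the opponent's mixed strategy,
    # renormalized each round to keep the weights bounded.
    w_row *= np.exp(eta * (A @ q))      # expected payoff of each row action
    w_col *= np.exp(eta * (B.T @ p))    # expected payoff of each column action
    w_row /= w_row.sum()
    w_col /= w_col.sum()

# CCE condition: no player gains by switching to a fixed action against the joint play.
row_value = np.sum(joint * A)
col_value = np.sum(joint * B)
row_best_dev = (A @ joint.sum(axis=0)).max()    # best fixed deviation vs column marginal
col_best_dev = (B.T @ joint.sum(axis=1)).max()  # best fixed deviation vs row marginal
print("row regret:", row_best_dev - row_value)
print("col regret:", col_best_dev - col_value)
```

As the horizon grows, both printed regrets shrink toward zero, so the empirical joint distribution approaches the set of CCE; this is the standard sense in which time-averaged no-regret play and CCE are tied together.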

Type
Adaptive and Learning Agents
Copyright
© Cambridge University Press, 2017 

