
An online scalarization multi-objective reinforcement learning algorithm: TOPSIS Q-learning

Published online by Cambridge University Press:  13 June 2022

Mohammad Mirzanejad
Affiliation: Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran

Morteza Ebrahimi
Affiliation: Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran

Peter Vamplew
Affiliation: School of Engineering, Information Technology and Physical Sciences, Federation University Australia, Ballarat, Australia

Hadi Veisi
Affiliation: Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran

Abstract

Conventional reinforcement learning focuses on problems with a single objective. However, many problems have multiple objectives or criteria that may be independent, related, or contradictory. In such cases, multi-objective reinforcement learning is used to propose a compromise among the solutions that balances the objectives. TOPSIS is a multi-criteria decision-making method that selects the alternative with the minimum distance from the positive ideal solution and the maximum distance from the negative ideal solution, so it can be used effectively in the decision-making process for selecting the next action. In this research, a single-policy algorithm called TOPSIS Q-learning is presented, with a focus on its performance in online mode. Unlike other single-policy methods, the first version of the algorithm does not require the user to specify the weights of the objectives. Because the user's preferences may not be completely definite, all weight preferences are combined as decision criteria and a solution is generated by considering all of these preferences at once; the user can thereby model uncertainty and changes in the objective weights around their stated preferences. If the user only wants to apply the algorithm to a specific set of weights, the second version of the algorithm accomplishes that efficiently.
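As a rough illustration of the TOPSIS ranking step described above, the following minimal sketch shows how an action could be chosen from per-action value vectors. It is illustrative only and not the paper's algorithm: the function name, the use of a Q-vector matrix as the decision matrix, and the assumption that every objective is benefit-type (larger is better) are assumptions made for this example.

```python
import numpy as np

def topsis_select(q_values, weights):
    """Pick the action whose weighted value vector is closest to the
    positive ideal solution and farthest from the negative ideal solution.

    q_values: (n_actions, n_objectives) matrix of estimated returns,
              all objectives assumed benefit-type (larger is better).
    weights:  (n_objectives,) importance weights.
    """
    # Vector (Euclidean) normalization of each objective column.
    norm = np.linalg.norm(q_values, axis=0)
    norm[norm == 0] = 1.0                     # guard against division by zero
    v = (q_values / norm) * weights           # weighted normalized matrix

    # Positive ideal = best value per objective; negative ideal = worst.
    v_pos = v.max(axis=0)
    v_neg = v.min(axis=0)

    # Euclidean distance of every action to the two ideal points.
    d_pos = np.linalg.norm(v - v_pos, axis=1)
    d_neg = np.linalg.norm(v - v_neg, axis=1)

    # Relative closeness: 1 means the action coincides with the positive ideal.
    closeness = d_neg / (d_pos + d_neg + 1e-12)
    return int(np.argmax(closeness))

# Example: three actions, two objectives, equal weights.
q = np.array([[1.0, 0.2],
              [0.6, 0.6],
              [0.1, 1.0]])
print(topsis_select(q, np.array([0.5, 0.5])))
```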

Type: Research Article
Copyright: © The Author(s), 2022. Published by Cambridge University Press

