
A practical support vector regression algorithm and kernel function for attritional general insurance loss estimation

Published online by Cambridge University Press:  24 August 2020

Shadrack Kwasa*
Affiliation:
Institutional Investment, London and Capital Asset Management, Two Fitzroy Place, 8 Mortimer Street, London, W1T 3JJ, UK
Daniel Jones
Affiliation:
Mathematical Institute, University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter (550), Woodstock Road, Oxford, OX2 6GG, UK
*Corresponding author. E-mail: [email protected]

Abstract

This paper derives a simple, implementable machine learning method for general insurance losses. An algorithm for learning a general insurance loss triangle is developed and justified. An argument is made for applying support vector regression (SVR) to this learning task, since SVR offers greater transparency than "black-box" methods such as deep neural networks, and the derived SVR methodology is applied specifically to it. A further argument is made for preserving the statistical features of the loss data within the SVR machine. A bespoke kernel function that preserves these statistical features is derived from first principles and named the exponential dispersion family (EDF) kernel. Properties of the EDF kernel are explored, and the kernel is applied to an insurance loss estimation exercise for homogeneous risks of three different insurers. The cumulative and ultimate losses predicted by the EDF kernel are compared to those predicted by the radial basis function kernel and by the chain-ladder method. A backtest of the developed method is performed, followed by a discussion of the results and their implications.
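For context, the chain-ladder method mentioned above as the benchmark works by estimating volume-weighted development factors from a cumulative run-off triangle and rolling each accident year forward to ultimate. The sketch below is a minimal, self-contained illustration of that baseline on a toy triangle; the triangle values are invented for illustration and are not data from the paper, and the paper's EDF kernel and SVR machinery are not reproduced here.

```python
# Minimal chain-ladder sketch (the benchmark method the paper compares
# its SVR/EDF-kernel approach against). Toy data, not from the paper.

def chain_ladder_ultimates(triangle):
    """Project ultimate losses from a cumulative run-off triangle.

    `triangle` is a list of rows (one per accident year); row i holds the
    observed cumulative losses across its development periods, so later
    accident years have fewer entries.
    """
    n = len(triangle)
    # Volume-weighted development factors f_j = sum C_{i,j+1} / sum C_{i,j},
    # using only accident years where both columns are observed.
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    # Roll each accident year forward to ultimate with the estimated factors.
    ultimates = []
    for row in triangle:
        c = row[-1]
        for j in range(len(row) - 1, n - 1):
            c *= factors[j]
        ultimates.append(c)
    return factors, ultimates

# Illustrative 3-year cumulative loss triangle.
triangle = [
    [100.0, 150.0, 165.0],
    [110.0, 168.0],
    [120.0],
]
factors, ultimates = chain_ladder_ultimates(triangle)
```

A kernel method such as SVR replaces these deterministic factor projections with a learned regression over the triangle entries; the paper's contribution is a kernel chosen so that the regression respects the exponential-dispersion-family structure of the loss data rather than a generic similarity measure like the radial basis function.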

Type: Paper
Copyright: © Institute and Faculty of Actuaries 2020

