
Convergence of a global stochastic optimization algorithm with partial step size restarting

Published online by Cambridge University Press: 19 February 2016

G. Yin*
Affiliation: Wayne State University
* Postal address: Department of Mathematics, Wayne State University, Detroit, MI 48202, USA. Email address: [email protected]

Abstract

This work develops a class of stochastic global optimization algorithms that are Kiefer-Wolfowitz (KW) type procedures with added perturbing noise and partial step size restarting. The motivation stems from the use of KW-type procedures and Monte Carlo versions of simulated annealing algorithms in a wide range of applications. Using weak convergence methods, we prove the convergence of the underlying algorithms under general noise processes.
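The abstract specifies the scheme only at a high level. The sketch below is a minimal illustration of the kind of recursion involved, not the paper's exact algorithm: it combines a Kiefer-Wolfowitz central finite-difference gradient estimate with added Gaussian perturbing noise and a periodically restarted step size. The function name, the particular schedules (a/k, c/k^{1/6}, b/sqrt(log n)), the restart rule, and the projection bound are all illustrative assumptions; the paper's precise conditions on the step-size, span, and noise sequences are not reproduced here.

```python
import numpy as np

def kw_global_minimize(f, x0, num_iters=20000, restart_period=2000,
                       a=0.1, c=0.5, b=1.0, bound=5.0, seed=0):
    """Sketch of a KW-type recursion with perturbing noise and restarting.

    The gradient of f is estimated by central finite differences (the
    Kiefer-Wolfowitz idea), a slowly decaying Gaussian perturbation is
    added to help the iterates escape local minima, and the step-size
    counter is restarted every `restart_period` iterations so the step
    size does not shrink to zero once and for all.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for n in range(1, num_iters + 1):
        k = (n - 1) % restart_period + 1    # counter restarts at each cycle
        a_n = a / k                         # step size, reset on restart
        c_n = c / k ** (1.0 / 6.0)          # finite-difference span
        b_n = b / np.sqrt(np.log(n + 1.0))  # annealing-style noise scale
        grad = np.empty(d)
        for i in range(d):                  # central finite differences
            e = np.zeros(d)
            e[i] = c_n
            grad[i] = (f(x + e) - f(x - e)) / (2.0 * c_n)
        # KW step plus additive Gaussian perturbing noise, projected onto
        # a compact set as is standard in stochastic approximation
        x = x - a_n * grad + a_n * b_n * rng.standard_normal(d)
        x = np.clip(x, -bound, bound)
    return x

# Example usage on a double-well f(x) = x^4 - 2x^2 + 0.1x, whose
# global minimum sits near x = -1.01 (hypothetical test function).
f = lambda v: float(v[0] ** 4 - 2.0 * v[0] ** 2 + 0.1 * v[0])
print(kw_global_minimize(f, x0=[2.0]))
```

The restart implemented here resets only the step-size counter while the noise scale keeps decaying in the global iteration index, which is one plausible reading of "partial" restarting; the perturbation schedule plays the role of the simulated-annealing temperature mentioned in the abstract.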

Type: General Applied Probability

Copyright © Applied Probability Trust 2000


Footnotes

This research was supported in part by the National Science Foundation under grants DMS-9877090 and DMS-9971608.
