
ECHO STATE QUEUEING NETWORKS: A COMBINATION OF RESERVOIR COMPUTING AND RANDOM NEURAL NETWORKS

Published online by Cambridge University Press: 17 May 2017

Sebastián Basterrech
Affiliation:
Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University, Karlovo náměstí 13, 121 35 Prague 2, Czech Republic. E-mail: [email protected]
Gerardo Rubino
Affiliation:
Inria Rennes – Bretagne Atlantique, Campus de Beaulieu, 35042 Rennes Cedex, France. E-mail: [email protected]

Abstract

This paper deals with two ideas that emerged during recent developments in Artificial Intelligence: Reservoir Computing (RC) and Random Neural Networks (RNNs). Both have been very successful in many applications. We propose a new model belonging to the first class, which takes its dynamics from the second. The new model is called the Echo State Queueing Network. The paper positions the model within the broader Machine Learning area and provides examples of its use and performance. We show on widely used benchmarks that it is a very accurate tool, and we illustrate how it compares with standard RC models.
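To make the combination concrete, the following minimal sketch illustrates how such a model can be assembled: a fixed, randomly generated reservoir whose units follow a queueing-style "load" activation in the spirit of the Random Neural Network (positive arrival rate divided by service rate plus negative arrival rate), together with a linear readout that is the only trained part, as is usual in Reservoir Computing. All names, dimensions, weight scales and the toy task below are illustrative assumptions and do not reproduce the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical; not taken from the paper).
n_in, n_res = 1, 100

# Fixed random, non-negative weights, kept untrained as in Reservoir Computing.
# Separate positive and negative signal rates mimic the spirit of Gelenbe's
# Random Neural Network (G-network) dynamics; sparsity and scales are arbitrary.
W_in_pos = rng.uniform(0.0, 0.5, (n_res, n_in))
W_in_neg = rng.uniform(0.0, 0.5, (n_res, n_in))
mask = rng.random((n_res, n_res)) < 0.1
W_res_pos = rng.uniform(0.0, 0.5, (n_res, n_res)) * mask
W_res_neg = rng.uniform(0.0, 0.5, (n_res, n_res)) * mask
r = np.full(n_res, 1.0)  # "service rates" of the fictitious queues (assumed constant)

def update_state(rho_prev, u):
    # Quotient (queueing-load) activation: the occupation of unit i is its
    # positive arrival rate divided by its service rate plus negative arrival rate.
    pos = W_in_pos @ u + W_res_pos @ rho_prev
    neg = W_in_neg @ u + W_res_neg @ rho_prev
    return np.clip(pos / (r + neg), 0.0, 0.999)  # clip to keep the loads below 1

def run_reservoir(series):
    rho = np.zeros(n_res)
    states = []
    for u in series:
        rho = update_state(rho, np.atleast_1d(u))
        states.append(rho.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    # Only the linear readout is trained (ridge regression), as in standard RC.
    X = np.hstack([states, np.ones((len(states), 1))])  # append a bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)

# Toy usage: one-step-ahead prediction of a sine wave (purely illustrative).
t = np.linspace(0.0, 20.0 * np.pi, 2000)
series = np.sin(t)
states = run_reservoir(series[:-1])                      # state i is produced by input i
w_out = train_readout(states[200:], series[1:][200:])    # drop a warm-up transient
X_test = np.hstack([states[200:], np.ones((len(states) - 200, 1))])
pred = X_test @ w_out
```

As in any Echo State Network, the heavy lifting is done by the fixed random recurrent part; replacing the usual sigmoid or tanh state update with the queueing quotient above is what distinguishes the Echo State Queueing Network idea in this sketch.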

Type: Research Article
Copyright: © Cambridge University Press 2017

