
FAST NON-NEGATIVE LEAST-SQUARES LEARNING IN THE RANDOM NEURAL NETWORK

Published online by Cambridge University Press: 18 May 2016

Stelios Timotheou*
Affiliation:
KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Cyprus. E-mail: [email protected]

Abstract


The random neural network is a biologically inspired neural model in which neurons interact by probabilistically exchanging positive and negative unit-amplitude signals, and it offers superior learning capabilities compared to other artificial neural networks. This paper considers non-negative least-squares supervised learning in this context and develops an approach that achieves fast execution and excellent learning capacity. The speedup results from significant enhancements in the solution of the non-negative least-squares problem, namely (a) the development of analytical expressions for evaluating the gradient and objective functions and (b) a novel limited-memory quasi-Newton solution algorithm. Simulation results on a disaster-management problem optimized through supervised learning verify the efficiency of the approach, which achieves a two-orders-of-magnitude execution speedup and improved solution quality compared to state-of-the-art algorithms.
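To make the core optimization concrete: the learning step described above reduces to a non-negative least-squares problem of the form min_{x ≥ 0} (1/2)‖Ax − b‖². The sketch below solves this problem with a plain projected-gradient iteration in Python. It is an illustrative stand-in, not the paper's algorithm: the paper's method additionally uses analytical expressions for the gradient and objective that are specific to the random neural network, together with a limited-memory quasi-Newton update, none of which is reproduced here; the function name and parameters are likewise illustrative.

```python
import numpy as np

def nnls_projected_gradient(A, b, max_iter=500, tol=1e-8):
    """Solve min_x 0.5 * ||A x - b||^2 subject to x >= 0 with a
    simple projected-gradient scheme (an illustrative stand-in for
    the paper's limited-memory quasi-Newton method)."""
    AtA = A.T @ A                 # precomputed Gram matrix; gradient is AtA @ x - Atb
    Atb = A.T @ b
    L = np.linalg.norm(AtA, 2)    # spectral norm = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = AtA @ x - Atb
        x_new = np.maximum(x - grad / L, 0.0)   # gradient step, then project onto x >= 0
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
    return x

# Usage on a random instance
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x = nnls_projected_gradient(A, b)
print(x.min() >= 0.0)  # True: all entries are non-negative
```

With the Gram matrix precomputed, each iteration costs a single matrix–vector product; quasi-Newton schemes such as the one developed in the paper improve on this first-order iteration by approximating curvature with limited memory, which is where the reported speedup originates.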

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Cambridge University Press 2016
