ACCURATE, ENERGY-EFFICIENT CLASSIFICATION WITH SPIKING RANDOM NEURAL NETWORK

Published online by Cambridge University Press: 21 May 2019

Khaled F. Hussain
Affiliation:
Computer Science Department, Faculty of Computers and Information, Assiut University, Asyut 71515, Egypt E-mail: [email protected]; [email protected]
Mohamed Yousef Bassyouni
Affiliation:
Computer Science Department, Faculty of Computers and Information, Assiut University, Asyut 71515, Egypt E-mail: [email protected]; [email protected]
Erol Gelenbe
Affiliation:
Intelligent Systems and Networks Group, Department of Electrical and Electronic Engineering, Imperial College London, SW7 2BT, UK E-mail: [email protected]

Abstract

Techniques based on Artificial Neural Networks (ANNs) have dominated state-of-the-art results in most problems related to computer vision, audio recognition, and natural language processing in the past few years, resulting in strong industrial adoption by leading technology companies worldwide. One of the major obstacles that has historically delayed large-scale adoption of ANNs is the huge computational and power cost associated with training and testing (deploying) them. In the meantime, Neuromorphic Computing platforms have recently achieved remarkable performance running bio-realistic Spiking Neural Networks at high throughput and very low power consumption, making them a natural alternative to ANNs. Here, we propose using the Random Neural Network, a spiking neural network with appealing theoretical and practical properties, as a general-purpose classifier that can match the classification power of ANNs on a number of tasks while retaining all the advantages of a spiking neural network. We demonstrate this on a number of real-world classification datasets.
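A large part of the Random Neural Network's practical appeal is its product-form steady-state solution (Gelenbe, 1989): each neuron's excitation probability q_i satisfies the fixed-point equation q_i = λ⁺_i / (r_i + λ⁻_i), where λ⁺_i and λ⁻_i are the total excitatory and inhibitory spike arrival rates at neuron i, so activations can be computed by iteration rather than by simulating individual spikes. The following is a minimal sketch of that computation, not the authors' implementation: the function name rnn_steady_state, the network size, and all weight and rate values are illustrative assumptions.

```python
# Minimal sketch of the RNN steady-state equations (Gelenbe, 1989):
#   q_i = lambda_plus_i / (r_i + lambda_minus_i)
# where the arrival rates depend on the other neurons' excitation
# probabilities q_j. All parameter values here are illustrative.
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, iters=200, tol=1e-9):
    """Fixed-point iteration for the RNN excitation probabilities q.

    W_plus[j, i]  : rate of excitatory spikes sent from neuron j to neuron i
    W_minus[j, i] : rate of inhibitory (negative) spikes from j to i
    Lambda, lam   : external excitatory / inhibitory arrival rates per neuron
    r             : firing rates per neuron
    """
    q = np.zeros(len(r))
    for _ in range(iters):
        lam_plus = Lambda + q @ W_plus    # total excitatory arrival rate
        lam_minus = lam + q @ W_minus     # total inhibitory arrival rate
        # Clipping to [0, 1] is a pragmatic guard in this sketch; in a
        # stable RNN, lambda_plus_i < r_i + lambda_minus_i keeps q_i < 1.
        q_new = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q

# Tiny 3-neuron example with arbitrary rates.
rng = np.random.default_rng(0)
q = rnn_steady_state(W_plus=rng.uniform(0.0, 0.5, (3, 3)),
                     W_minus=rng.uniform(0.0, 0.5, (3, 3)),
                     Lambda=np.full(3, 0.4), lam=np.full(3, 0.1),
                     r=np.full(3, 1.0))
print(q)  # excitation probabilities, usable as neuron activations
```

The vector q plays the role that activations play in a conventional ANN, which is what allows the RNN to be used as a drop-in classifier while remaining a spiking model.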

Type: Research Article
Copyright: © Cambridge University Press 2019

