
IDENTIFIABILITY OF RECURRENT NEURAL NETWORKS

Published online by Cambridge University Press:  04 August 2003

A.A. Al-Falou
Affiliation:
Vienna University of Technology
D. Trummer
Affiliation:
Vienna University of Technology

Abstract

We examine the identifiability of a nonlinear state space system under general assumptions. The discrete time evolution of the state is generated by a recurrent Elman network. For a large set of Elman networks we determine the class of observationally equivalent minimal systems, i.e., minimal systems that exhibit the same input-output behavior.

The authors are grateful for discussions with M. Deistler and D. Bauer. A.A. Al-Falou was funded by the ERNSI network within the European Union program Training and Mobility of Researchers (TMR). D. Trummer was funded by the FWF (Austrian Science Fund). We thank an anonymous referee for helpful comments and suggestions.
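The state-space setting described in the abstract can be sketched as follows. This is a minimal rendering of the standard Elman parametrization; the symbols are illustrative and need not match the paper's exact notation:

```latex
% Discrete-time state-space system whose state transition is an
% Elman-type recurrent network; \sigma is a componentwise sigmoid
% activation, u_t the input, x_t the hidden state, y_t the output.
\begin{aligned}
  x_{t+1} &= \sigma\bigl(A x_t + B u_t\bigr),\\
  y_t     &= C x_t .
\end{aligned}
```

Identifiability then asks when two parameter triples $(A, B, C)$ and $(\tilde A, \tilde B, \tilde C)$ generate the same input-output map; the paper characterizes the class of observationally equivalent minimal systems of this form.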

Type
Research Article
Copyright
© 2003 Cambridge University Press


References


Albertini, F. (1993) Controllability of Discrete-Time Nonlinear Systems and Some Related Topics. PhD thesis, Rutgers University, New Brunswick, New Jersey.
Albertini, F. & P. Dai Pra (1995) Recurrent neural networks: Identification and other system theoretic properties. Neural Network Systems Techniques and Applications 3, 1–41.
Albertini, F. & E. Sontag (1993) For neural networks, function determines form. Neural Networks 6, 975–990.
Brockwell, P.J. & R.A. Davis (1991) Time Series: Theory and Methods. Berlin: Springer.
Dörfler, M. & M. Deistler (1998) A structure theory for identification of recurrent neural networks, part 1. In H.J.C. Huijberts, H. Nijmeijer, A.J. Van der Schaft, & J.M.A. Scherpen (eds.), Proceedings of the 4th IFAC Nonlinear Control Systems Design Symposium, pp. 459–464. Amsterdam: Elsevier.
Draisma, G. & P.H. Franses (1997) Recognizing changing seasonal patterns using artificial neural networks. Journal of Econometrics 81, 273–280.
Elman, J.L. (1989) Structured representations and connectionist models. In G. Olson & E. Smith (eds.), Proceedings of the 11th Annual Conference of the Cognitive Science Society, pp. 17–25. Hillsdale, NJ: Erlbaum.
Elman, J.L. (1991) Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning 7, 195–226.
Gencay, R. & R. Garcia (2000) Pricing and hedging derivative securities with neural networks and a homogeneity hint. Journal of Econometrics 94, 93–115.
Granger, C.W.J., T.-H. Lee, & H. White (1993) Testing for neglected nonlinearity in time series models. Journal of Econometrics 56, 269–290.
Hannan, E.J. & M. Deistler (1988) The Statistical Theory of Linear Systems. New York: Wiley.
Leisch, F., A. Trapletti, & K. Hornik (1999) Stationarity and stability of autoregressive neural network processes. In M.S. Kearns, S.A. Solla, & D.A. Cohn (eds.), Advances in Neural Information Processing Systems, vol. 11, pp. 267–273. Cambridge, MA: MIT Press.
Richards, C.E. & B.D. Baker (1999) A comparison of conventional linear regression methods and neural networks for forecasting educational spending. Economics of Education Review 18, 405–415.
Tkacz, G. (2001) Neural network forecasting of Canadian GDP growth. International Journal of Forecasting 17, 57–69.
Trapletti, A., F. Leisch, & K. Hornik (1998) Stationarity and Integrated Autoregressive Neural Network Processes. Working paper, Sonderforschungsbereich 10.
Trapletti, A., F. Leisch, & K. Hornik (1999) On the Ergodicity and Stationarity of the ARMA(1,1) Recurrent Neural Network Process. Working paper 24, Sonderforschungsbereich 10.