
Linear Regression to Minimize the Total Error of the Numerical Differentiation

Published online by Cambridge University Press: 31 January 2018

Jengnan Tzeng*
Affiliation:
Department of Mathematical Science, National Chengchi University, No. 64, Sec. 2, ZhiNan Rd., Wenshan District, Taipei City 11605, Taiwan (R.O.C.)
*Corresponding author. Email address: [email protected] (J. Tzeng)

Abstract

It is well known that the numerical derivative contains two types of error: truncation error and rounding error. The total error can be determined from the rounding error of the evaluated function values, together with the step size and the unknown coefficient of the truncation error. The step size strongly affects the truncation error, especially when the step size is large; on the other hand, the rounding error dominates the numerical error when the step size is too small. Thus, choosing a suitable step size is an important task in computing the numerical derivative, and an accurate numerical difference requires a good estimate of the best step size. A Taylor expansion gives the order of the truncation error, which is usually expressed in big-O notation as E(h) = Ch^k. Since the leading coefficient C contains the factor f^(k)(ζ) for a high order k and an unknown ζ, the truncation error is usually estimated only by a rough upper bound, and an attempt to estimate the high-order difference f^(k)(ζ) directly typically carries an even larger error. Hence, the uncertainty of ζ and the rounding errors hinder an accurate numerical derivative.
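
To make the trade-off concrete, the short Python sketch below (our own illustration, not taken from the paper) compares the central-difference approximation of f(x) = sin(x) at x = 1 with the exact derivative over a range of step sizes: the error first shrinks roughly like h^2 (truncation) and then grows again once rounding error takes over.

import numpy as np

def f(x):
    return np.sin(x)

x0 = 1.0
exact = np.cos(x0)  # exact derivative of sin at x0

# Central difference at decreasing step sizes: the error falls roughly like h^2
# (truncation) until rounding error, roughly eps/h, dominates for very small h.
for h in np.logspace(-1, -12, 12):
    approx = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    print(f"h = {h:.1e}   |error| = {abs(approx - exact):.3e}")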

We introduce a statistical process into the traditional numerical difference. For a given step size, the new method estimates the truncation error and the rounding error at the same time; once these two types of error are estimated successfully, the numerical result can be corrected to much better accuracy. We also propose a genetic approach to obtain a confident numerical derivative.
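
As a rough illustration of estimating both error components at once (a minimal sketch under our own assumptions, not the author's actual procedure), one can posit a total-error model |E(h)| ≈ C·h^2 + r/h for the central difference, fit C and r by ordinary least squares from errors observed at several step sizes of a test function with a known derivative, and then minimize the fitted model to suggest a step size:

import numpy as np

def f(x):
    return np.exp(x)

x0 = 0.5
exact = np.exp(x0)  # known derivative, used only to build the toy data

# Observed absolute errors of the central difference at several step sizes.
hs = np.logspace(-2, -7, 20)
errs = np.array([abs((f(x0 + h) - f(x0 - h)) / (2.0 * h) - exact) for h in hs])

# Assumed error model: |E(h)| ~ C*h**2 + r/h (truncation + rounding).
# Fit C and r simultaneously by ordinary least squares.
A = np.column_stack([hs**2, 1.0 / hs])
(C, r), *_ = np.linalg.lstsq(A, errs, rcond=None)

# The model C*h**2 + r/h is minimized at h* = (r / (2C))**(1/3);
# abs() guards against a slightly negative fitted coefficient.
h_star = (abs(r) / (2.0 * abs(C))) ** (1.0 / 3.0)
print(f"fitted C = {C:.3e}, r = {r:.3e}, suggested step h* = {h_star:.2e}")

In practice the true derivative is of course unknown, and a weighted or log-scale fit may be more robust; the sketch only conveys the regression viewpoint on balancing the two error terms.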

Type: Research Article

Copyright © Global-Science Press 2017

