
Research Article | Open Access

Volume 2014 | Article ID 354237 | https://doi.org/10.1155/2014/354237

Özgür Yeniay, Öznur İşçi, Atilla Göktaş, M. Niyazi Çankaya, "Time Scale in Least Square Method", Abstract and Applied Analysis, vol. 2014, Article ID 354237, 6 pages, 2014. https://doi.org/10.1155/2014/354237

# Time Scale in Least Square Method

Accepted: 06 Mar 2014
Published: 03 Apr 2014

#### Abstract

The study of dynamic equations on time scales is a new area of mathematics. Time scale calculus builds a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and nabla derivatives: the delta derivative is defined in the forward direction and the nabla derivative in the backward direction. Within the scope of this study, we consider how the parameters of a regression equation over integer-valued data can be obtained through time scales. We therefore implement the least squares method according to the time scale definitions of the derivative and obtain the coefficients of the model. Two different sets of coefficients arise for the same model, one originating from the forward jump operator and one from the backward jump operator. In each case, the sum of the vertical deviations between the resulting regression equation and the observed values equals half of the sample size. We also estimate the coefficients of the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We believe that time scale theory may offer a new perspective on least squares, especially when the assumptions of linear regression are violated.

#### 1. Introduction

Although theoretical to a high extent, time scale calculus builds a bridge between continuous and discrete analysis [1, 2]. Such calculations provide an integrated structure for the analysis of difference and differential equations [3–5]. Those equations [6, 7] have been applied in dynamic programming [8–13], neural networks [10, 14, 15], economic modeling [10, 16], population biology [17], quantum calculus [18], geometric analysis [19], real-time communication networks [20], intelligent robotic control [21], adaptive sampling [22], approximation theory [13], financial engineering [12] on time scales, and switched linear circuits [23], among others.

This study deals with the estimation of the parameters of a regression equation by using the least squares method on time scales.

The main purpose of the least squares method is to minimize the sum of squared vertical deviations. To find the coefficients of the simple linear regression model $y = \beta_0 + \beta_1 x + \varepsilon$, the partial derivatives of the sum of squares with respect to the coefficients are taken and set equal to zero, which yields the normal equations. Solving the normal equations gives the fitted model $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$ for (9).

In statistics, the least squares method is used with the usual definition of the derivative when the parameters of a regression equation are estimated. In this study, the time scale definitions of the derivative are applied to the least squares method. The simple linear regression model of (9) is considered, and the estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ of the coefficients $\beta_0$ and $\beta_1$ are obtained according to the forward and backward jump operators. Different parameter values arise from the forward and backward jump operators for the same simple linear regression equation. When the observation values are discrete, the forward jump operator $\sigma(t) = t + 1$ and the backward jump operator $\rho(t) = t - 1$ are used.

The main approach in regression analysis is to minimize the sum of squared (vertical) deviations between actual and estimated values. This method is favored for regression analysis because of its statistical properties [24]. The point to keep in mind is that, since the least squares method works with the sum of squared vertical deviations, the analysis operates accordingly. The first and second derivatives must also be considered, since the aim is to minimize the sum of squared vertical deviations.

The study consists of two main parts, followed in the last part by an implementation of time scale theory. The main parts cover time scale theory and simple linear regression, respectively. The time scale theory part explains the time scale derivative and defines the forward and backward jump operators. The other main part explains the simple linear regression model of the form (9) and the calculation of the estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ of $\beta_0$ and $\beta_1$ by the method of least squares. The time scale treatment includes the normal equations for the forward and backward jump operators, as well as the corresponding estimators of $\beta_0$ and $\beta_1$ under each operator.

#### 2. Time Scale Preliminaries

Bohner and Peterson [1] developed the concept of a time scale in their studies. Their purpose was to bring together discrete analysis and continuous analysis under one model. Any nonempty closed subset of the real numbers is called a time scale and is denoted by $\mathbb{T}$. Thus $\mathbb{R}$, $\mathbb{Z}$, $\mathbb{N}_0$, and $\mathbb{N}$ (the real numbers, the integers, the natural numbers with zero, and the positive natural numbers, respectively) are examples of time scales.

On the other hand, $\mathbb{Q}$, $\mathbb{R} \setminus \mathbb{Q}$, $\mathbb{C}$, and $(0, 1)$ (the rational numbers, the irrational numbers, the complex numbers, and the open interval between 0 and 1, respectively) are not time scales. Since a time scale must be closed, it is clear that the rational numbers can never form a time scale.

The delta derivative of a function $f$ defined on $\mathbb{T}$ specializes as follows. (i) If $\mathbb{T} = \mathbb{R}$, then $f^{\Delta}(t) = f'(t)$, the ordinary derivative. (ii) If $\mathbb{T} = \mathbb{Z}$, then $f^{\Delta}(t) = \Delta f(t) = f(t+1) - f(t)$, the forward difference operator.

Definition 1. Given a function $f : \mathbb{T} \to \mathbb{R}$ and $t \in \mathbb{T}^{\kappa}$, the number $f^{\Delta}(t)$ with the property that, for every $\varepsilon > 0$, there exists a neighborhood $U$ of $t$ such that $|f(\sigma(t)) - f(s) - f^{\Delta}(t)(\sigma(t) - s)| \leq \varepsilon\,|\sigma(t) - s|$ for all $s \in U$ is called the delta derivative of $f$ at $t$ [1, 25, 26].

Definition 2. Given a function $f : \mathbb{T} \to \mathbb{R}$ and $t \in \mathbb{T}_{\kappa}$, the number $f^{\nabla}(t)$ with the property that, for every $\varepsilon > 0$, there exists a neighborhood $U$ of $t$ such that $|f(\rho(t)) - f(s) - f^{\nabla}(t)(\rho(t) - s)| \leq \varepsilon\,|\rho(t) - s|$ for all $s \in U$ is called the nabla derivative of $f$ at $t$ [16, 26].

Definition 3. Given a function $f : \mathbb{T} \to \mathbb{R}$ and $t \in \mathbb{T}^{\kappa} \cap \mathbb{T}_{\kappa}$, the combined dynamic derivative $f^{\diamond_\alpha}(t) = \alpha f^{\Delta}(t) + (1 - \alpha) f^{\nabla}(t)$, $0 \leq \alpha \leq 1$, in the symmetric case $\alpha = 1/2$ is called the center derivative of $f$ at $t$ [27].
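To make the discrete specializations concrete, here is a small illustrative sketch (not from the paper; the function $f(t) = t^2$ is invented): on the time scale $\mathbb{Z}$ the delta derivative reduces to the forward difference and the nabla derivative to the backward difference.

```python
# Illustrative sketch: on T = Z the jump operators are sigma(t) = t + 1 and
# rho(t) = t - 1, so the delta and nabla derivatives become plain differences.

def delta_derivative(f, t):
    """Delta derivative on Z: forward difference f(t+1) - f(t)."""
    return f(t + 1) - f(t)

def nabla_derivative(f, t):
    """Nabla derivative on Z: backward difference f(t) - f(t-1)."""
    return f(t) - f(t - 1)

f = lambda t: t * t  # f(t) = t^2

print(delta_derivative(f, 3))  # (3+1)^2 - 3^2 = 7
print(nabla_derivative(f, 3))  # 3^2 - 2^2 = 5
```

Note that the two derivatives disagree at every point of $\mathbb{Z}$, which is exactly why the forward and backward jump operators later produce two different fitted regression lines.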

Definition 4 (forward jump operator [1]). Let $\mathbb{T}$ be a time scale. Then, for $t \in \mathbb{T}$, the forward jump operator $\sigma : \mathbb{T} \to \mathbb{T}$ is defined by $\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}$. The right-graininess function $\mu : \mathbb{T} \to [0, \infty)$ is defined for all $t \in \mathbb{T}$ by $\mu(t) = \sigma(t) - t$.

Definition 5 (backward jump operator [1]). Let $\mathbb{T}$ be a time scale. Then, for $t \in \mathbb{T}$, the backward jump operator $\rho : \mathbb{T} \to \mathbb{T}$ is defined by $\rho(t) = \sup\{s \in \mathbb{T} : s < t\}$. The left-graininess function $\nu : \mathbb{T} \to [0, \infty)$ is defined by $\nu(t) = t - \rho(t)$.
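The two jump operators and graininess functions can be sketched for a finite discrete time scale as follows. The representation of $\mathbb{T}$ as a sorted list and the convention at the endpoints (the operators return $t$ itself there, as in [1]) are our own illustrative choices, not from the paper.

```python
# Sketch: jump operators and graininess on a finite discrete time scale,
# with T represented as a sorted list of points.

def sigma(T, t):
    """Forward jump: smallest point of T strictly greater than t
    (returns t itself at the maximum of T)."""
    bigger = [s for s in T if s > t]
    return min(bigger) if bigger else t

def rho(T, t):
    """Backward jump: largest point of T strictly less than t
    (returns t itself at the minimum of T)."""
    smaller = [s for s in T if s < t]
    return max(smaller) if smaller else t

def mu(T, t):
    """Right-graininess: mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

def nu(T, t):
    """Left-graininess: nu(t) = t - rho(t)."""
    return t - rho(T, t)

T = [1, 2, 4, 7]
print(sigma(T, 2), rho(T, 2), mu(T, 2), nu(T, 2))  # 4 1 2 1
```

On $\mathbb{Z}$ every point has $\mu(t) = \nu(t) = 1$, which is the case used throughout the rest of the paper.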

#### 3. Simple Linear Regression Analysis

Simple linear regression [28, 29] involves a single explanatory (independent) variable $x$ and a single response (dependent) variable $y$. Suppose that a true relationship exists between $y$ and $x$ and that the values of $y$ observed at each level of $x$ are random. The expected value of $y$ at each level of $x$ is given by $E(y \mid x) = \beta_0 + \beta_1 x$, where $\beta_0$ is the intercept, $\beta_1$ is the slope, and both are unknown regression coefficients. Each observation can then be described by the model below.

In the equation $y = \beta_0 + \beta_1 x + \varepsilon$, the term $\varepsilon$ is the random error with zero mean and (unknown) variance $\sigma^2$.

Take $n$ pairs of observations $(x_i, y_i)$, $i = 1, \ldots, n$. Figure 1 shows the scatter diagram of the observed values and the estimated regression line. The estimators of $\beta_0$ and $\beta_1$ should pass the "best line" through the data. The German scientist Carl Friedrich Gauss (1777–1855) suggested estimating the parameters $\beta_0$ and $\beta_1$ in (9) so as to minimize the sum of squared vertical deviations in Figure 1.

This criterion used to estimate the regression parameters is the least squares method. By (9), the observations in the sample can be written as $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, $i = 1, \ldots, n$, and the sum of squared deviations of the observations from the true regression line is $L = \sum_{i=1}^{n} \varepsilon_i^2 = \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2$. The least squares estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ of $\beta_0$ and $\beta_1$ satisfy $\partial L / \partial \beta_0 = -2\sum_{i=1}^{n}(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0$ and $\partial L / \partial \beta_1 = -2\sum_{i=1}^{n}(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i)x_i = 0$. When these two equations are simplified, they become $n\hat{\beta}_0 + \hat{\beta}_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$ and $\hat{\beta}_0 \sum_{i=1}^{n} x_i + \hat{\beta}_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i$. These equations are called the normal equations of least squares.
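As a minimal numerical sketch of the ordinary least squares estimators derived above (the data values are invented for illustration):

```python
# Ordinary least squares for y = b0 + b1*x via the closed-form estimators
# obtained from the normal equations.

def ols(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # Slope: (sum x_i y_i - n xbar ybar) / (sum x_i^2 - n xbar^2)
    b1 = (sum(xi * yi for xi, yi in zip(x, y)) - n * xbar * ybar) / \
         (sum(xi * xi for xi in x) - n * xbar * xbar)
    # Intercept: ybar - b1 * xbar
    b0 = ybar - b1 * xbar
    return b0, b1

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # made-up data, roughly y = 2x
b0, b1 = ols(x, y)
residual_sum = sum(yi - (b0 + b1 * xi) for xi, yi in zip(x, y))
print(b0, b1, residual_sum)  # residuals sum to (about) zero
```

The vanishing residual sum is exactly the first normal equation; it is this property that the forward and backward jump operators will break by a fixed amount of $n/2$.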

Thus, the estimated regression line is $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$. Each observation pair satisfies the relation $y_i = \hat{\beta}_0 + \hat{\beta}_1 x_i + e_i$, $i = 1, \ldots, n$, where $e_i$ are the residuals, $y_i$ the observation values, and $\hat{y}_i$ the estimated values.

#### 4. Ordinary Least Square Method

##### 4.1. Normal Equations

This subsection presents the normal equations and the resulting formulas for the estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ in the ordinary case and for the forward and backward jump operators.

Normal equations in the normal (usual) situation: $n\hat{\beta}_0 + \hat{\beta}_1 \sum x_i = \sum y_i$ and $\hat{\beta}_0 \sum x_i + \hat{\beta}_1 \sum x_i^2 = \sum x_i y_i$. Time scale normal equations (forward jump operator, $\sigma(t) = t + 1$), obtained by setting the delta derivatives of $L$ with respect to $\beta_0$ and $\beta_1$ equal to zero: $n\hat{\beta}_0 + \hat{\beta}_1 \sum x_i = \sum y_i - \frac{n}{2}$ and $\hat{\beta}_0 \sum x_i + \hat{\beta}_1 \sum x_i^2 = \sum x_i y_i - \frac{1}{2}\sum x_i^2$. Time scale normal equations (backward jump operator, $\rho(t) = t - 1$), obtained from the nabla derivatives: $n\hat{\beta}_0 + \hat{\beta}_1 \sum x_i = \sum y_i + \frac{n}{2}$ and $\hat{\beta}_0 \sum x_i + \hat{\beta}_1 \sum x_i^2 = \sum x_i y_i + \frac{1}{2}\sum x_i^2$.
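The forward and backward normal equations can be solved as a 2-by-2 linear system. The sketch below is our reconstruction under the assumption $\mathbb{T} = \mathbb{Z}$ with $\sigma(t) = t + 1$ and $\rho(t) = t - 1$: on $\mathbb{Z}$ the delta derivative of $L$ with respect to $\beta_0$ is $L(\beta_0 + 1) - L(\beta_0) = -2\sum e_i + n$ and with respect to $\beta_1$ is $-2\sum x_i e_i + \sum x_i^2$, which gives the constant terms $n/2$ and $\sum x_i^2 / 2$; the nabla derivative flips their signs. The data are invented.

```python
# Least squares with delta (forward) or nabla (backward) derivatives on T = Z.
# Setting the dynamic derivatives of L(b0, b1) to zero yields a linear system:
#   n*b0  + Sx*b1  = Sy  -+ n/2
#   Sx*b0 + Sxx*b1 = Sxy -+ Sxx/2   (minus for forward, plus for backward)

def time_scale_fit(x, y, forward=True):
    n = len(x)
    s = 1.0 if forward else -1.0
    Sx, Sy = sum(x), sum(y)
    Sxx = sum(xi * xi for xi in x)
    Sxy = sum(xi * yi for xi, yi in zip(x, y))
    c1 = Sy - s * n / 2
    c2 = Sxy - s * Sxx / 2
    det = n * Sxx - Sx * Sx           # Cramer's rule for the 2x2 system
    b0 = (c1 * Sxx - c2 * Sx) / det
    b1 = (n * c2 - Sx * c1) / det
    return b0, b1

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]        # made-up data
b0f, b1f = time_scale_fit(x, y, forward=True)
dev = sum(yi - (b0f + b1f * xi) for xi, yi in zip(x, y))
print(dev)  # sum of vertical deviations is n/2 = 2.5
```

By construction the first equation forces the residual sum to be exactly $\pm n/2$, which reproduces the pattern reported in Table 1.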

##### 4.2. Graphs for Samples of Size 10, 20, 30, 40, and 50: Normal Situation and Forward and Backward Jump Operators

The simple linear regression equations obtained from samples of 10, 20, 30, 40, and 50 data points for the normal situation and for the forward and backward jump operators are illustrated in Figure 2.

The sums of vertical distances between the simple linear regression models (normal, forward, and backward operators) and the observation values for sample sizes $n$ = 10, 20, 30, 40, and 50 are provided in Table 1.

Table 1: Simple linear regression equations and sums of vertical deviations, observations, and estimates for each method and sample size.

| $n$ | Method | Simple linear regression equation | Sum of vertical deviations | Sum of observations | Sum of estimates |
| --- | --- | --- | --- | --- | --- |
| 10 | Normal | $\hat{y} = 3.27 + 1.01x$ | 0 | 62 | 62 |
| 10 | Forward | $\hat{y} = 7.42 - 0.59x$ | 5 | 62 | 57 |
| 10 | Backward | | -5 | 62 | 67 |
| 20 | Normal | | 0 | 155 | 155 |
| 20 | Forward | | 10 | 155 | 145 |
| 20 | Backward | | -10 | 155 | 165 |
| 30 | Normal | | 0 | 291 | 291 |
| 30 | Forward | | 15 | 291 | 276 |
| 30 | Backward | | -15 | 291 | 306 |
| 40 | Normal | | 0 | 448 | 448 |
| 40 | Forward | | 20 | 448 | 428 |
| 40 | Backward | | -20 | 448 | 468 |
| 50 | Normal | | 0 | 638 | 638 |
| 50 | Forward | | 25 | 638 | 613 |
| 50 | Backward | | -25 | 638 | 663 |

There is a relationship between the sample size $n$ and the sum of vertical distances between the regression line and the observation values: for the forward and backward jump operators, this sum equals half of the sample size (in absolute value). The results of the implementation can be seen in Table 1.

##### 4.3. Minimum Test

Let $(\hat{\beta}_0, \hat{\beta}_1)$ be the critical point of $L$, meaning $\partial L / \partial \beta_0 = \partial L / \partial \beta_1 = 0$ there. If $L_{\beta_0\beta_0} > 0$ and $L_{\beta_0\beta_0} L_{\beta_1\beta_1} - L_{\beta_0\beta_1}^2 > 0$, then $L$ has a minimum value. The minimum test calculated according to the ordinary definition of the derivative gives [30] $L_{\beta_0\beta_0} = 2n$, $L_{\beta_1\beta_1} = 2\sum x_i^2$, and $L_{\beta_0\beta_1} = 2\sum x_i$. The minimum test calculated according to the time scale derivative definitions gives, for both the forward and backward jump operators, second dynamic derivatives $2n$ with respect to $\beta_0$ and $2\sum x_i^2$ with respect to $\beta_1$. Since $2n > 0$ and $2\sum x_i^2 > 0$ always, $L$ always has a minimum value.
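A quick numerical check of the time scale minimum test (a sketch under the assumption $\mathbb{T} = \mathbb{Z}$, with invented data): since $L$ is quadratic in $\beta_0$ with leading coefficient $n$, its second delta difference with step 1 is the constant $2n > 0$.

```python
# Second delta difference of L with respect to b0 on T = Z:
# (L(b0+2) - L(b0+1)) - (L(b0+1) - L(b0)) should equal 2n for any b0, b1.

def L(b0, b1, x, y):
    """Sum of squared vertical deviations."""
    return sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))

x = [1, 2, 3, 4, 5]
y = [2.0, 4.1, 5.9, 8.2, 9.8]  # made-up data, n = 5
b1 = 2.0
second_delta = (L(2, b1, x, y) - L(1, b1, x, y)) - (L(1, b1, x, y) - L(0, b1, x, y))
print(second_delta)  # 2 * n = 10
```

The analogous second difference in $\beta_1$ gives $2\sum x_i^2$, so both time scale second derivatives are positive and $L$ indeed attains a minimum.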

#### 5. Results

In statistics, the least squares method is used with the usual definition of the derivative when the parameters of a regression equation are estimated. In this study, the time scale definitions of the derivative were applied to the least squares method. The simple linear regression model of (9) was considered, and the estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ of the coefficients $\beta_0$ and $\beta_1$ were obtained according to both the forward and backward jump operators. Different parameter values were obtained from the forward and backward jump operators for the same simple linear regression equation. When the observation values are discrete, the forward jump operator $\sigma(t) = t + 1$ and the backward jump operator $\rho(t) = t - 1$ are used [31].

The study analyzes integer-valued data according to the time scale definitions of the derivative. The standard approach in regression analysis is to minimize the sum of squared (vertical) deviations between actual and estimated values. Since the least squares method works with the sum of squared vertical deviations, the analysis proceeds accordingly, and the first and second derivatives must be considered because the aim is to minimize that sum. Since the second derivatives $2n$ and $2\sum x_i^2$ are always positive, $L$ always has a minimum value.

The least squares method yields the estimates for which the sum of squared vertical deviations is minimal. When the least squares method is used with the time scale derivative definitions, a relationship emerges between the sample size $n$ and the sum of vertical distances between the regression line and the observation values: that sum equals half of the sample size. To reduce the sum of vertical distances, the different parameter values $\hat{\beta}_0$ and $\hat{\beta}_1$ resulting from the forward and backward jump operators for the simple linear regression equation have been of interest. An alternative is to construct the regression equation directly on a time scale. In conclusion, the time scale derivative definitions were applied to integer-valued data, and solutions were suggested for the results obtained.

#### 6. Discussion and Possible Future Studies

This study only introduces a very basic derivative concept from time scale calculus and applies it to obtaining the regression parameters. An extension of this study would be to use a step size (graininess) other than the value 1. This would be useful especially when the assumptions of the linear regression model are violated and a robust estimate of the regression line is needed. It may be hard to determine an optimal step size analytically. We would suggest that the study be performed on generated data containing outliers, replicated at least 1000 times.

It is also possible to extend the number of regressors from a single explanatory variable to multiple ones in the linear regression model and to estimate the parameters using the time scale versions of both the forward and backward jump operators. Analytically speaking, it will not be easy to estimate the parameters in a simple closed form using time scales; in fact, we would be dealing with very complicated formulas or matrices. However, it may be worth extending the study to a multiple linear regression model.

In the meantime, taking the graininess to be 1 results in the sum of vertical distances between the regression line and the observation values being equal to half of the sample size (see Table 1 and Figure 2). The graininess in both the forward and backward jump operators should be chosen so that the estimated regression line from each operator is fairly close to the actual regression line, especially when the assumptions of the linear regression model are violated.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

1. M. Bohner and A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, Mass, USA, 2001.
2. W. B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, John Wiley & Sons, New York, NY, USA, 2011.
3. E. Girejko, A. B. Malinowska, and D. F. M. Torres, “The contingent epiderivative and the calculus of variations on time scales,” Optimization: A Journal of Mathematical Programming and Operations Research, vol. 61, no. 3, pp. 251–264, 2012.
4. N. H. Du and N. T. Dieu, “The first attempt on the stochastic calculus on time scale,” Stochastic Analysis and Applications, vol. 29, no. 6, pp. 1057–1080, 2011.
5. R. Almeida and D. F. M. Torres, “Isoperimetric problems on time scales with nabla derivatives,” Journal of Vibration and Control, vol. 15, no. 6, pp. 951–958, 2009.
6. M. Bohner and A. Peterson, Eds., Advances in Dynamic Equations on Time Scales, Birkhäuser, Boston, Mass, USA, 2003.
7. M. Bohner, “Calculus of variations on time scales,” Dynamic Systems and Applications, vol. 13, no. 3-4, pp. 339–349, 2004.
8. J. Seiffertt, S. Sanyal, and D. C. Wunsch, “Hamilton-Jacobi-Bellman equations and approximate dynamic programming on time scales,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 38, no. 4, pp. 918–923, 2008.
9. Z. Han, S. Sun, and B. Shi, “Oscillation criteria for a class of second-order Emden-Fowler delay dynamic equations on time scales,” Journal of Mathematical Analysis and Applications, vol. 334, no. 2, pp. 847–858, 2007.
10. C. C. Tisdell and A. Zaidi, “Basic qualitative and quantitative results for solutions to nonlinear, dynamic equations on time scales with an application to economic modelling,” Nonlinear Analysis: Theory, Methods & Applications, vol. 68, no. 11, pp. 3504–3524, 2008.
11. C. Lizama and J. G. Mesquita, “Almost automorphic solutions of dynamic equations on time scales,” Journal of Functional Analysis, vol. 265, no. 10, pp. 2267–2311, 2013.
12. S. Sanyal, Stochastic Dynamic Equations [PhD dissertation], Missouri University of Science and Technology, Rolla, Mo, USA, 2008.
13. Q. Sheng, M. Fadag, J. Henderson, and J. M. Davis, “An exploration of combined dynamic derivatives on time scales and their applications,” Nonlinear Analysis: Real World Applications, vol. 7, no. 3, pp. 395–413, 2006.
14. X. Chen and Q. Song, “Global stability of complex-valued neural networks with both leakage time delay and discrete time delay on time scales,” Neurocomputing, vol. 121, no. 9, pp. 254–264, 2013.
15. A. Chen and F. Chen, “Periodic solution to BAM neural network with delays on time scales,” Neurocomputing, vol. 73, no. 1–3, pp. 274–282, 2009.
16. F. M. Atici, D. C. Biles, and A. Lebedinsky, “An application of time scales to economics,” Mathematical and Computer Modelling, vol. 43, no. 7-8, pp. 718–726, 2006.
17. M. Bohner, M. Fan, and J. Zhang, “Periodicity of scalar dynamic equations and applications to population models,” Journal of Mathematical Analysis and Applications, vol. 330, no. 1, pp. 1–9, 2007.
18. M. Bohner and T. Hudson, “Euler-type boundary value problems in quantum calculus,” International Journal of Applied Mathematics & Statistics, vol. 9, no. J07, pp. 19–23, 2007.
19. G. Sh. Guseinov and E. Özyılmaz, “Tangent lines of generalized regular curves parametrized by time scales,” Turkish Journal of Mathematics, vol. 25, no. 4, pp. 553–562, 2001.
20. I. Gravagne, R. J. Marks II, J. Davis, and J. DaCunha, “Application of time scales to real world time communications networks,” in Proceedings of the American Mathematical Society Western Section Meeting, 2004.
21. I. A. Gravagne, J. M. Davis, and R. J. Marks, “How deterministic must a real-time controller be?” in Proceedings of the IEEE IRS/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 3856–3861, Edmonton, Canada, August 2005.
22. I. Gravagne, J. Davis, J. DaCunha, and R. J. Marks II, “Bandwidth reduction for controller area networks using adaptive sampling,” in Proceedings of the International Conference on Robotics and Automation, pp. 2–6, New Orleans, La, USA, April 2004.
23. R. J. Marks II, I. A. Gravagne, J. M. Davis, and J. J. DaCunha, “Nonregressivity in switched linear circuits and mechanical systems,” Mathematical and Computer Modelling, vol. 43, no. 11-12, pp. 1383–1392, 2006.
24. J. S. Armstrong, Principles of Forecasting: A Handbook for Researchers and Practitioners, Kluwer Academic, Dordrecht, The Netherlands, 2001.
25. D. R. Anderson and J. Hoffacker, “Green's function for an even order mixed derivative problem on time scales,” Dynamic Systems and Applications, vol. 12, no. 1-2, pp. 9–22, 2003.
26. R. Agarwal, M. Bohner, D. O'Regan, and A. Peterson, “Dynamic equations on time scales: a survey,” Journal of Computational and Applied Mathematics, vol. 141, no. 1-2, pp. 1–26, 2002.
27. Q. Sheng and A. Wang, “A study of the dynamic difference approximations on time scales,” International Journal of Difference Equations, vol. 4, no. 1, pp. 137–153, 2009.
28. J. Neter, M. H. Kutner, C. J. Nachtsheim, and W. Wasserman, Applied Linear Statistical Models, McGraw-Hill, New York, NY, USA, 4th edition, 1996.
29. D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, John Wiley & Sons, New York, NY, USA, 3rd edition, 2002.
30. J. R. Hass, F. R. Giordano, and M. D. Weir, Thomas’ Calculus, Pearson Addison Wesley, 10th edition, 2004.
31. R. P. Agarwal and M. Bohner, “Basic calculus on time scales and some of its applications,” Results in Mathematics, vol. 35, no. 1-2, pp. 3–22, 1999.

Copyright © 2014 Özgür Yeniay et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.