New algorithms for nonlinear equations

In this paper, three new algorithms for solving nonlinear equations are introduced using the variational iteration technique, and their convergence criteria are discussed. Several examples are given to illustrate that these algorithms are more efficient than Newton's method, Halley's method, and Householder's method. The efficiency of the introduced algorithms is further confirmed in terms of the so-called efficiency index. Such techniques have potential applications in adaptive estimation and control, parameter estimation, and related areas.


Introduction
Many problems in mathematics, physics, and the engineering sciences reduce to solving nonlinear equations. For example, in statistics or adaptive estimation and control, estimating an unknown parameter is a critical task that can usually be converted into the problem of solving a nonlinear equation. In practice, unmanned systems such as unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) have attracted growing research interest, and their dynamic equations are often nonlinear in the real world. Up to now, the majority of methods for coping with nonlinear systems or equations are still based on linearization. However, the considered systems are sometimes highly nonlinear, and when linearization is used, the accumulated error can be very large. If we can solve these nonlinear equations directly instead of linearizing them, the accuracy of unmanned systems can be improved. Since most nonlinear equations do not have exact solutions, we are obliged to find numerical solutions, which are iterative in nature.
In recent years, a large number of iterative methods have been developed using different techniques such as the decomposition method, Taylor's series, the perturbation method, quadrature formulas, and the variational iteration technique (Nazeer et al., 2016a; 2016b; Chun, 2006; Burden and Faires, 1997; Stoer and Bulirsch, 2002; Quarteroni et al., 2000; Chen et al., 1993; Householder, 1970; Traub, 1982; Inokuti et al., 1978; He, 1999a; 1999b; Noor, 2007; Shah et al., 2016; Thukral, 2015; 2016). The most famous method for solving nonlinear equations is Newton's method (Nazeer et al., 2016a). To improve its convergence, various modified methods have been developed in the literature (Chun, 2006; Gutierrez and Hernandez, 1997; Householder, 1970; Sebah and Gourdon, 2001; Abbasbandy, 2003). In Chun (2006), generalized and modified generalized Newton-Raphson methods, which are free from second derivatives, are suggested. In Gutierrez and Hernandez (1997), the convergence analysis of Halley's method is simplified, and the best possible error bound under the standard Newton-Kantorovich hypotheses is obtained. In Sebah and Gourdon (2001), global convergence theorems for Halley's and Chebyshev's methods are proved. In Noor and Noor (2007), a two-step iterative method for solving nonlinear equations, known as the predictor-corrector Halley's method, is given; its convergence order of six was a significant improvement over previously known methods.
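For concreteness, the classical one-point iterations mentioned above can be sketched as follows. This is a minimal illustration on a test equation of our own choosing (x^3 - 2 = 0), not one of the benchmark problems from the tables in this paper:

```python
# Classical iterations for solving f(x) = 0, illustrated on f(x) = x**3 - 2.
# The test function, initial guess, and tolerance are our own choices.

def newton(f, df, x, tol=1e-12, maxit=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n); convergence order 2."""
    for n in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        x = x - fx / df(x)
    return x, maxit

def halley(f, df, d2f, x, tol=1e-12, maxit=50):
    """Halley's method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f''); order 3."""
    for n in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        fpx, fppx = df(x), d2f(x)
        x = x - 2 * fx * fpx / (2 * fpx**2 - fx * fppx)
    return x, maxit

f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
d2f = lambda x: 6 * x

root_n, it_n = newton(f, df, 1.0)
root_h, it_h = halley(f, df, d2f, 1.0)
print(root_n, it_n)  # cube root of 2 ~ 1.25992; Newton needs a few more iterations
print(root_h, it_h)  # Halley, being third order, meets the tolerance sooner
```

The third-order Halley iteration reaches the stopping tolerance in fewer iterations than Newton's method, at the price of one extra derivative evaluation per step, which is exactly the trade-off the efficiency index quantifies.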
In this paper, we propose three new algorithms by applying the variational iteration technique with two auxiliary functions g(x) and h(x). The first, g(x), acts as a predictor function of convergence order p with p >= 1, and the second, h(x), acts as a corrector function of convergence order q with q >= 1. The predictor function helps to obtain iterative methods of convergence order p + q. Using the variational iteration technique, we develop new iterative methods with higher orders of convergence. The variational iteration technique was introduced by Inokuti et al. (1978). Using this technique, Noor and Shah (2009) and Noor (2007) derived some iterative methods for solving nonlinear equations. The technique was originally devised to solve a variety of diverse problems (He, 1999a; 1999b). We apply it to obtain three higher-order iterative methods with convergence order twelve. Finally, several examples are given to illustrate their performance.

Consider a nonlinear equation

f(x) = 0, (1)

with α as a simple root and x_0 as an initial guess sufficiently close to α. For the sake of convenience, we consider an approximate solution x_n such that f'(x_n) ≠ 0. Furthermore, consider g(x) and h(x) as iteration functions of order p and q, respectively. Then

x_{n+1} = g(x_n) + λ h(x_n) f(g(x_n)) (2)

is a recurrence relation which generates iterative methods of order p + q, where h(x) is an arbitrary auxiliary function, which will later be evaluated at g(x_n), and λ is a parameter, usually called the Lagrange multiplier, which can be identified by the optimality condition. Using the optimality criteria, we have

λ = -1 / f'(g(x_n)). (3)

From Eq. 2 and Eq. 3, we get

x_{n+1} = g(x_n) - [f(g(x_n)) / f'(g(x_n))] h(x_n). (4)

Now we apply Eq. 4 to construct a general iterative scheme. Suppose that

g(x_n) = x_n - f(x_n)/f'(x_n) - f(x_n)^2 f''(x_n) / (2 f'(x_n)^3), (5)

which is the well-known Householder's method with cubic convergence. With the help of Eq. 4 and Eq. 5, we can write

y_n = x_n - f(x_n)/f'(x_n) - f(x_n)^2 f''(x_n) / (2 f'(x_n)^3),
x_{n+1} = y_n - f(y_n)/f'(y_n) - f(y_n)^2 f''(y_n) / (2 f'(y_n)^3). (6)

This is the two-step Householder method with convergence order nine. Expanding f''(y_n) in a Taylor series about x_n, we get

f''(y_n) = f''(x_n) + (y_n - x_n) f'''(x_n) + ((y_n - x_n)^2 / 2) f''''(x_n) + ... (9)

Ignoring squared and higher powers of (y_n - x_n) yields

f''(y_n) ≈ f''(x_n) + (y_n - x_n) f'''(x_n). (10)

Substituting Eq. 10 in Eq. 6, we obtain Eq. 13. Since p = 9 and q = 3, we have p/q = 9/3 = 3, and Eq. 13 becomes Eq. 14, the main and general iterative scheme, which we use to deduce iterative methods for solving nonlinear equations by considering some special cases of the auxiliary function h.
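As a numerical check of the building blocks above, the sketch below implements Householder's method (Eq. 5) and the two-step variant (Eq. 6), in which a full Householder step is applied again at the predicted point. The test equation x^3 + 4x^2 - 10 = 0 and the iteration limits are our own choices, not taken from this paper's tables:

```python
# Householder's method (order 3) and the two-step composition (order 9),
# illustrated on the test equation x**3 + 4*x**2 - 10 = 0 (our own choice).

def householder_step(f, df, d2f, x):
    """One Householder step: x - f/f' - f^2 f'' / (2 f'^3); cubic convergence."""
    fx, fpx, fppx = f(x), df(x), d2f(x)
    return x - fx / fpx - fx**2 * fppx / (2 * fpx**3)

def two_step_householder(f, df, d2f, x, tol=1e-12, maxit=20):
    """Two-step Householder: predictor y_n = Householder(x_n),
    corrector x_{n+1} = Householder(y_n)."""
    for n in range(maxit):
        if abs(f(x)) < tol:
            return x, n
        y = householder_step(f, df, d2f, x)   # predictor step
        x = householder_step(f, df, d2f, y)   # corrector step at the predicted point
    return x, maxit

f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
d2f = lambda x: 6 * x + 8

root, iters = two_step_householder(f, df, d2f, 1.0)
print(root, iters)  # root ~ 1.36523, reached in very few outer iterations
```

Starting from x_0 = 1.0, the residual drops below the tolerance within a couple of outer iterations, which is the qualitative behavior one expects from a high-order predictor-corrector scheme.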

Convergence analysis
In this section, we discuss the convergence order of the general iteration scheme (Eq. 14) developed using the variational iteration technique of Inokuti et al. (1978).

Numerical examples
In this section, a comparison of different iterative methods is given. The examples in Tables 1-8 confirm that Algorithms I, II, and III are more efficient than Newton's method (NM) (Nazeer et al., 2016a; 2016b), Halley's method (HM) (Chen et al., 1993), Traub's method (TM) (Traub, 1982), and the modified Halley's method (MHM) (Noor, 2007). The columns report the number of iterations and the number of function or derivative evaluations required to meet the stopping criterion, and the magnitude |f(x)| at the final estimate x. The efficiency index (Nazeer et al., 2016a; 2016b), defined as p^{1/m}, where p is the order of convergence and m is the number of function or derivative evaluations per iteration, shows how fast and efficient a method is, and is used to analyze the performance of different iterative methods. The efficiency index of our methods is 12^{1/5} ≈ 1.6438.
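The efficiency indices quoted above can be reproduced directly from the definition p^{1/m}, where p is the order of convergence and m is the number of function or derivative evaluations per iteration. The counts used below for the classical methods are the standard ones; the script itself is illustrative and not part of the paper:

```python
# Efficiency index p**(1/m): p = order of convergence,
# m = function/derivative evaluations per iteration (standard counts).
methods = {
    "Newton      (p=2,  m=2)": (2, 2),
    "Halley      (p=3,  m=3)": (3, 3),
    "Householder (p=3,  m=3)": (3, 3),
    "Proposed    (p=12, m=5)": (12, 5),
}

for name, (p, m) in methods.items():
    print(f"{name}: {p ** (1 / m):.4f}")
# Newton gives 1.4142, Halley and Householder 1.4422, the proposed methods 1.6438
```

This makes the comparison explicit: despite needing more evaluations per iteration, the order-twelve methods achieve a higher efficiency index than the classical second- and third-order methods.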

Conclusion
Three new algorithms have been introduced that provide simple yet powerful means of improving the efficiency of solving nonlinear equations. Their efficiency index is greater than that of Newton's method (Nazeer et al., 2016a; 2016b), Halley's method (Chen et al., 1993), Traub's method (Traub, 1982), and Noor's method (Noor, 2007). In future work, we will apply our methods to more complex examples, present the polynomiography of our methods, and solve an adaptive control problem with improved parameter estimation.

Authors' contributions
Muhammad Idrees developed the main idea of the study and drafted the manuscript. Ma Hongbin refined the idea, supervised the study, participated in its design and coordination, and helped to draft the manuscript. Amir Naseem, Abdur Rauf Nizami, Sajid Ali, and Li Song helped with the analysis and software operations. All authors have read and approved the final manuscript.