Copyright © 2011 Tech Science Press
CMES, vol.71, no.3, pp.279-304, 2011
Simple "Residual-Norm" Based Algorithms, for the Solution of a Large System of Non-Linear algebraic equations, which Converge Faster than the Newton's Method Chein-Shan Liu1 and Satya N. Atluri2
Abstract: For solving a system of nonlinear algebraic equations (NAEs) of the type F(x) = 0, or Fi(xj) = 0, i, j = 1, ..., n, a Newton-like algorithm has several drawbacks, such as local convergence, sensitivity to the initial guess of the solution, and the time-penalty involved in finding the inversion of the Jacobian matrix ∂Fi/∂xj. Based on an invariant manifold defined in the space of (x,t) in terms of the residual-norm of the vector F(x), we can derive a gradient-flow system of nonlinear ordinary differential equations (ODEs) governing the evolution of x with a fictitious time-like variable t as an independent variable. We can prove that in the present novel Residual-Norm Based Algorithms (RNBAs), the residual-error is automatically decreased to zero along the path of x(t). More importantly, we have derived three iterative algorithms which do not involve the fictitious time and its stepsize Δt. We apply the three RNBAs to several numerical examples, revealing exponential convergence with different slopes and displaying the high efficiency and accuracy of the present iterative algorithms. All three presently proposed RNBAs: (i) are easy to implement numerically, (ii) converge much faster than the Newton method, (iii) do not involve the inversion of the Jacobian ∂Fi/∂xj, (iv) are suitable for solving a large system of NAEs, and (v) are purely iterative in nature.

Keywords: Nonlinear algebraic equations, Non-linear ordinary differential equations, Non-linear partial differential equations, Residual-Norm Based Algorithms (RNBAs), Fictitious time integration method (FTIM), Iterative algorithm

1 Department of Civil Engineering, National Taiwan University, Taipei, Taiwan. E-mail: [email protected]
2 Center for Aerospace Research & Education, University of California, Irvine
1 Introduction

In many practical nonlinear engineering problems, governed by ordinary or partial differential equations, methods such as the finite element, boundary element, finite volume discretization, and meshless methods eventually lead to a system of nonlinear algebraic equations (NAEs). Many numerical methods used in computational mechanics, as demonstrated by Zhu, Zhang and Atluri (1998), Atluri and Zhu (1998a), Atluri (2002), Atluri and Shen (2002), and Atluri, Liu and Han (2006), lead to the solution of a system of linear algebraic equations for a linear problem, and of a system of NAEs for a nonlinear problem.

Over the past forty years, two important contributions have been made towards the numerical solution of NAEs. One of these methods has been called the "predictor-corrector" or "pseudo-arclength continuation" method. This method has its historical roots in the embedding and incremental loading methods, which have been used successfully for several decades by engineers to improve convergence when an adequate starting value for an iterative method is not available. Another is the so-called simplicial or piecewise linear method. The monographs by Allgower and Georg (1990) and Deuflhard (2004) are devoted to the continuation methods for solving NAEs.

Liu and Atluri (2008) have pioneered a Fictitious Time Integration Method (FTIM) to solve a large system of NAEs, and Liu and his coworkers showed that high performance can be achieved by using the FTIM [Liu (2008, 2009a, 2009b, 2009c); Liu and Chang (2009)]. The FTIM, whilst robust, still suffers from the following drawbacks: (i) even though it does not involve the computation of the Jacobian matrix ∂Fi/∂xj of Fi(xj) = 0, i, j = 1, ..., n, or its inversion, the FTIM is still slow to converge; (ii) the convergence is only local; (iii) it still involves a time-step Δt in integrating the ODEs which are used as surrogates in solving the NAEs; (iv) it is not simply iterative in nature.

The aims of the present paper are to develop: (1) methods which converge faster than the Newton method for solving a system of NAEs, based simply on the scalar norm of the residual error in solving F(x) = 0; (2) a method which does not involve the inversion of ∂Fi/∂xj; (3) a method which is globally convergent; and (4) a method which is purely iterative in nature and does not involve actually integrating a system of ODEs, with a time step Δt in numerical schemes, such as the GPS (group preserving scheme) developed by Liu (2001).

The remainder of this paper is arranged as follows. Some evolution methods for solving NAEs are briefly sketched in Section 2. In Section 3 we give a theoretical basis of the RNBA: we start from a continuous manifold defined in terms of the residual-norm, and arrive at a system of ODEs using a "normality condition". Section 4 is devoted to deriving a scalar equation to keep x on the manifold; we then propose three strategies to select the weighting factors for the regularized "gradient vector", which automatically give a convergent behavior of the residual-error curve. In Section 5 we give four numerical examples to test the RNBAs with different weighting factors. Finally, the iterative algorithms are summarized in Section 6, and the advantages of the newly developed RNBAs are emphasized.
2 Different evolution methods

We consider a system of nonlinear algebraic equations (NAEs) in their vector form:
$F(x) = 0.$  (1)
In order to eliminate the need for inverting a matrix in the iteration procedure, Ramm (2007) has proposed a lazy-bone method based on the following evolution equation:
$\dot{x} = -F(x)$,  (2)
which in general leads to a divergence of the solution. Liu and Atluri (2008) have proposed another first-order system of nonlinear ODEs:
$\dot{x} = -\frac{\nu}{q(t)}\, F(x)$,  (3)
where ν is a nonzero constant and q(t) may in general be a monotonically increasing function of t. In their approach, the term ν/q(t) plays the major role of a stabilized controller, helping one obtain a solution even for a bad initial guess, and speeding up the convergence. Liu and Chang (2009) combined it with a nonstandard group preserving scheme for solving a system of ill-posed linear equations. Ku, Yeih, Liu and Chi (2009) employed a time-like function q(t) = (1 + t)^m, 0 < m ≤ 1, in Eq. (3), and better performance was observed. In spite of its success, the FTIM has only a local convergence and needs the viscous damping coefficients to be determined for the different equations in the same problem. To remedy the shortcoming of the vector homotopy method as initiated by Davidenko (1953), Liu, Yeih, Kuo and Atluri (2009) have proposed a scalar homotopy method with the following evolution equation:
$\dot{x} = -\frac{\partial h/\partial t}{\|\partial h/\partial x\|^2}\, \frac{\partial h}{\partial x}$,  (4)
where

$h(x,t) = \frac{1}{2}\big[t\,\|F(x)\|^2 - (1-t)\,\|x\|^2\big]$,  (5)

$\frac{\partial h}{\partial t} = \frac{1}{2}\big[\|F(x)\|^2 + \|x\|^2\big]$,  (6)

$\frac{\partial h}{\partial x} = t\,B^T F - (1-t)\,x$,  (7)
in which B is the Jacobian matrix, with its ij-component given by $B_{ij} = \partial F_i/\partial x_j$. This method has global convergence; however, its convergence speed is quite slow. Ku, Yeih and Liu (2010) combined this idea with an exponentially decaying scalar homotopy function, and developed a manifold-based exponentially convergent algorithm (MBECA), based on integrating a system of nonlinear ODEs:
$\dot{x} = -\frac{\nu}{(1+t)^m}\,\frac{\|F\|^2}{\|B^T F\|^2}\, B^T F.$  (8)
Two major drawbacks appear in the MBECA: irregular bursts and a flattened behavior in the trajectory of the residual-error. Before the derivation of our new algorithms, we also mention that Hirsch and Smale (1979) have derived a continuous Newton method governed by the following differential equation:
$\dot{x}(t) = -B^{-1}(x)\,F(x).$  (9)
It can be seen that the ODEs in Eq. (9) are difficult to calculate, because they involve the inverse matrix $B^{-1}$. Usually it is difficult to derive a closed-form solution of Eq. (9); hence a numerical scheme, such as the Euler method, can be employed to integrate Eq. (9). For the Newton algorithm we can derive
$x(t+\Delta t) = x(t) - \Delta t\, B^{-1} F$,  (10)

$\dot{F} = B\,\dot{x} = -F$,  (11)

$F(t+\Delta t) = F(t) - \Delta t\, F(t)$,  (12)

$\frac{\|F(t+\Delta t)\|}{\|F(t)\|} = 1 - \Delta t$,  (13)
where Δt is the time stepsize used in the Euler scheme. The last equation means that the ratio of two consecutive residual errors, as given in Eq. (13), is 1 − Δt for the Newton algorithm. All the above methods require one to specify some parameters, such as ν, m, and the time stepsize Δt used in the numerical integrations.
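To make Eqs. (10) and (13) concrete, the following minimal Python sketch (our own illustration, not part of the original paper; the function names and callable-based interface are assumptions) performs one Newton step with an explicit stepsize:

```python
import numpy as np

def newton_step(x, F, jac, dt=1.0):
    """Damped Newton step of Eq. (10): x <- x - dt * B^{-1} F.
    dt = 1 recovers the classical Newton method; each step requires
    a linear solve with the Jacobian B = jac(x)."""
    return x - dt * np.linalg.solve(jac(x), F(x))
```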
Simple "Residual-Norm" Based Algorithms
283
In this paper we introduce novel and very simple iterative "Residual-Norm Based Algorithms" (RNBAs), which can be easily implemented to solve NAEs, and thereby a nonlinear system of partial differential equations when suitably discretized. The present RNBAs overcome the two major drawbacks observed in the MBECA: no irregular bursts and no flattened behavior appear in the trajectory of the residual-error.
3 Theoretical basis: invariant manifold
We define a scalar function h, depending on the "residual-norm" of the error in F(x) = 0, and a monotonically increasing function Q(t), where t is a fictitious time-like parameter:
$h(x,t) := \frac{1}{2}\,Q(t)\,\|F(x)\|^2$,  (14)
and define a surface
$h(x,t) - C = 0.$  (15)
This equation prescribes an invariant manifold in the space of (x,t). By the above implicit function we in fact require x to be a function of the fictitious time variable t. We do not need to specify the function Q(t) a priori; 2C/Q(t) merely acts as a measure of the residual error in Eq. (1) in time. Hence, we impose in our algorithm that Q(t) > 0 is a monotonically increasing function of t. We let Q(0) = 1, and C is determined by the initial condition x(0) = x0 with
$C = \frac{1}{2}\,\|F(x_0)\|^2.$  (16)
Usually C > 0, and C = 0 only when the initial value x0 is exactly the solution of Eq. (1); such a lucky coincidence is rare. We expect h(x,t) − C = 0 to be an invariant manifold in the space of (x,t) for a dynamical system h(x(t),t) − C = 0 to be specified further. When C > 0 and Q > 0, the manifold defined by Eq. (15) is continuous, and thus the following differential operation carried out on the manifold makes sense. As a "consistency condition", by taking the time differential of Eq. (15) with respect to t and considering x = x(t), we have
$\dot{h} = \frac{1}{2}\,\dot{Q}(t)\,\|F(x)\|^2 + Q(t)\,(B^T F)\cdot\dot{x} = 0.$  (17)
Eq. (17) cannot uniquely determine the evolution equation for x; however, we assume a "normality condition" that
$\dot{x} = -\lambda\,\frac{\partial h}{\partial x} = -\lambda\,Q(t)\,B^T F$,  (18)
where λ is to be determined. Inserting Eq. (18) into Eq. (17), we can solve for λ:
$\lambda = \frac{\dot{Q}(t)\,\|F\|^2}{2\,Q^2(t)\,\|B^T F\|^2}.$  (19)
Thus we obtain an evolution equation for x defined by a "gradient-flow" or "normality rule":

$\dot{x} = -q(t)\,\frac{\|F\|^2}{\|B^T F\|^2}\,B^T F$,  (20)
where

$q(t) := \frac{\dot{Q}(t)}{2\,Q(t)}.$  (21)
Hence, in our algorithm if Q(t) can be guaranteed to be an increasing function of t, we may have an absolutely convergent property in solving the NAEs in Eq. (1):
$\|F(x)\|^2 = \frac{2C}{Q(t)}.$  (22)
When t is large, the above equation forces the residual error ‖F(x)‖ to tend to zero, and meanwhile the solution of Eq. (1) is obtained approximately. Later in this paper, we prove that the ratio of residual errors derived from Eq. (20) is much better than that of the Newton algorithm in Eq. (13).
4 Dynamics of the present iterative algorithms
4.1 Discretizing, yet keeping x on the manifold [h(x,t) − C = 0]
Now we discretize the foregoing continuous time dynamics into discrete time dynamics:
$x(t+\Delta t) = x(t) - \Delta t\,q(t)\,\frac{\|F\|^2}{\|B^T F\|^2}\,B^T F$,  (23)
which is obtained from the ODEs in Eq. (20) by applying the Euler scheme.
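As an illustration, a minimal Python sketch of one Euler step of Eq. (23) follows (our own, with assumed names); note that at this stage a value of q(t) and a stepsize Δt must still be supplied, both of which the algorithms of Section 4.2 will eliminate:

```python
import numpy as np

def gradient_flow_step(x, F, jac, q_t, dt):
    """One Euler step of Eq. (23):
    x <- x - dt * q(t) * (||F||^2 / ||B^T F||^2) * B^T F."""
    Fx = F(x)                 # residual vector F(x)
    B = jac(x)                # Jacobian matrix B
    BtF = B.T @ Fx            # gradient direction B^T F
    return x - dt * q_t * (Fx @ Fx) / (BtF @ BtF) * BtF
```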
Simple "Residual-Norm" Based Algorithms
285
In order to keep x on the manifold defined by Eq. (22), we can consider the evolution of F along the path x(t) by:
$\dot{F} = B\,\dot{x} = -q(t)\,\frac{\|F\|^2}{\|B^T F\|^2}\,A F$,  (24)
where
$A := B\,B^T.$  (25)
Suppose that we simply use the Euler scheme to integrate Eq. (24):
$F(t+\Delta t) = F(t) - \Delta t\,q(t)\,\frac{\|F\|^2}{\|B^T F\|^2}\,A F.$  (26)
Taking the square-norms of both sides of Eq. (26) and using Eq. (22), we can obtain
$\frac{2C}{Q(t+\Delta t)} = \frac{2C}{Q(t)} - 2\,\Delta t\,\frac{2C\,q(t)}{Q(t)}\,\frac{F\cdot(AF)}{\|B^T F\|^2} + (\Delta t)^2\,\frac{2C\,q^2(t)}{Q(t)}\,\frac{\|F\|^2}{\|B^T F\|^4}\,\|AF\|^2.$  (27)
Thus we have the following scalar equation:
$a\,(\Delta t)^2 - b\,\Delta t + 1 - \frac{Q(t)}{Q(t+\Delta t)} = 0$,  (28)
where
$a := q^2(t)\,\frac{\|F\|^2\,\|AF\|^2}{\|B^T F\|^4}$,  (29)

$b := 2\,q(t).$  (30)
As a result, h(x,t) − C = 0, t ∈ {0, 1, 2, ...} remains an invariant manifold in the space of (x,t) for the discrete time dynamical system h(x(t),t) − C = 0, which will be explored further in the next section.
4.2 Three simple and novel algorithms

Now we specify the discrete time dynamics h(x(t),t) = C, t ∈ {0, 1, 2, ...}, through specifying the discrete time dynamics of Q(t), t ∈ {0, 1, 2, ...}. Note that a discrete time dynamics is an iterative dynamics, which in turn amounts to an iterative algorithm.
Inserting Eqs. (29) and (30) into Eq. (28) we can derive
$a_0\,(q\,\Delta t)^2 - 2\,(q\,\Delta t) + 1 - \frac{Q(t)}{Q(t+\Delta t)} = 0$,  (31)
where
$a_0 := \frac{\|F\|^2\,\|AF\|^2}{\|B^T F\|^4} \geq 1$,  (32)

because of the condition $\|B^T F\|^2 = F\cdot(AF) \leq \|F\|\,\|AF\|$
by using the Cauchy-Schwarz inequality. In our previous experience, when Q(t) is fixed to be a given function, such as an exponential function of t, the resultant algorithm has some drawbacks, as observed by Ku, Yeih and Liu (2010). Thus, we leave Q(t) unspecified here; instead, we let Q(t) be a quantity automatically derived from the new algorithms. From Eq. (31) we let
$s = a_0\,(q\,\Delta t)^2 - 2\,(q\,\Delta t) + 1 = \frac{Q(t)}{Q(t+\Delta t)}$;  (33)
thus s signifies the ratio Q(t)/Q(t+Δt). We search for a minimum of s with respect to Δt by setting the derivative of Eq. (33) with respect to Δt to zero:
$\Delta t = \frac{1}{q\,a_0}.$  (34)
Inserting it into Eq. (33) we can derive the minimum of s:
$s = 1 - \frac{1}{a_0} < 1$,  (35)
due to the fact that a0 ≥ 1, as shown in Eq. (32). The above property is very important. From Eqs. (22) and (33) it follows that
$\frac{\|F(t+\Delta t)\|^2}{\|F(t)\|^2} = s.$  (36)
Thus, Eq. (35) means that the ratio of two consecutive residual errors is smaller than one:
$\frac{\|F(t+\Delta t)\|}{\|F(t)\|} = \sqrt{1 - a_0^{-1}} < 1.$  (37)
Simple "Residual-Norm" Based Algorithms
287
Inserting the value of t from Eq. (34) into Eq. (23) we obtain the first algorithm:
1 x(t + t) = x(t) - a0
F2 BTF
2
BTF
=
x(t
)
-
BTF AF
2 2
BTF.
(38)
It is interesting that in the above algorithm no parameters and no stepsize Δt are required. Furthermore, the property in Eq. (37) is very important, since it guarantees the new algorithm to be absolutely convergent to the true solution. Corresponding to the gradient vector BᵀF, we can understand $\|B^T F\|^2\,B^T F/\|A F\|^2$ as a regularized gradient vector. In front of it, some weighting factor ω may be placed to speed up the convergence.
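For concreteness, here is a minimal Python sketch of one iteration of Algorithm 1, Eq. (38) (the function names and callable-based interface are our own assumptions, not the paper's):

```python
import numpy as np

def rnba1_step(x, F, jac):
    """One iteration of Algorithm 1, Eq. (38):
    x <- x - (||B^T F||^2 / ||A F||^2) * B^T F, with A = B B^T.
    No parameter, no stepsize, and no inversion of the Jacobian."""
    Fx = F(x)
    B = jac(x)
    BtF = B.T @ Fx            # the gradient vector B^T F
    AF = B @ BtF              # A F = B B^T F
    return x - (BtF @ BtF) / (AF @ AF) * BtF
```

Note that each iteration costs only two matrix-vector products, with Bᵀ and B; no linear system is solved.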
The above Δt in Eq. (34) may be too conservative. Thus we specify a certain constant s = s0 < 1, and from Eq. (33) we have
$a_0\,(q\,\Delta t)^2 - 2\,(q\,\Delta t) + 1 - s_0 = 0.$  (39)
We can take the solution of Δt to be

$\Delta t = \frac{1 + \sqrt{1 - (1-s_0)\,a_0}}{q\,a_0}$, if $1 - (1-s_0)\,a_0 \geq 0$,  (40)

$\Delta t = \frac{1}{q\,a_0}$, if $1 - (1-s_0)\,a_0 < 0.$  (41)
Inserting the above two Δt's into Eq. (23) we can derive the second algorithm:

$x(t+\Delta t) = x(t) - \omega\,\frac{\|B^T F\|^2}{\|A F\|^2}\,B^T F$,  (42)

where

$\omega = 1 + \sqrt{1 - (1-s_0)\,a_0}$, if $1 - (1-s_0)\,a_0 \geq 0$,  (43)

$\omega = 1$, if $1 - (1-s_0)\,a_0 < 0.$  (44)
This algorithm includes a given parameter s0 < 1, but does not need Δt; its ratio of residual errors is given by

$\frac{\|F(t+\Delta t)\|}{\|F(t)\|} = \sqrt{s_0}$, if $1 - (1-s_0)\,a_0 \geq 0$,  (45)

$\frac{\|F(t+\Delta t)\|}{\|F(t)\|} = \sqrt{1 - a_0^{-1}}$, if $1 - (1-s_0)\,a_0 < 0.$  (46)
In Eq. (38) the weighting factor is ω = 1. In contrast, the weighting factor ω in Eq. (42) is larger than or equal to 1.
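A corresponding sketch of one Algorithm 2 iteration, Eqs. (42)-(44), follows (again with our own assumed names; the default s0 = 0.9 is the value used in Example 1 below):

```python
import numpy as np

def rnba2_step(x, F, jac, s0=0.9):
    """One iteration of Algorithm 2, Eqs. (42)-(44), for a chosen s0 < 1."""
    Fx = F(x)
    B = jac(x)
    BtF = B.T @ Fx
    AF = B @ BtF
    a0 = (Fx @ Fx) * (AF @ AF) / (BtF @ BtF) ** 2    # Eq. (32)
    disc = 1.0 - (1.0 - s0) * a0
    w = 1.0 + np.sqrt(disc) if disc >= 0.0 else 1.0  # Eqs. (43)-(44)
    return x - w * (BtF @ BtF) / (AF @ AF) * BtF
```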
Instead of the constant s0, we may allow s to be a function of a0. We observe that

$s = 1 - \frac{1}{a_0^2}$  (47)
automatically satisfies $1 - (1-s)\,a_0 \geq 0$. Hence, by solving Eq. (33) for Δt with the above s, we can derive

$\Delta t = \frac{1 + \sqrt{1 - a_0^{-1}}}{q\,a_0}$,  (48)
and inserting it into Eq. (23) we can obtain the third algorithm:

$x(t+\Delta t) = x(t) - \omega\,\frac{\|B^T F\|^2}{\|A F\|^2}\,B^T F$,  (49)

where the weighting factor is given by

$\omega = 1 + \sqrt{1 - a_0^{-1}} > 1.$  (50)
This algorithm also does not involve specifying any parameter or time stepsize. The ratio of residual errors of this algorithm is

$\frac{\|F(t+\Delta t)\|}{\|F(t)\|} = \sqrt{1 - a_0^{-2}} < 1.$  (51)
Below we give some numerical tests of the newly proposed Residual-Norm Based Algorithms (RNBAs), which are respectively labelled in this paper as Algorithm 1, Algorithm 2 and Algorithm 3.
5 Numerical comparisons of the three RNBAs

Before comparing the presently developed three algorithms, we must stress that these algorithms do not need a stepsize. However, in order to compare them with the Newton method, we require a Δt to be inserted into Eq. (13) to obtain the ratio of residual errors belonging to the Newton scheme. Thus we use Eq. (34) to calculate Δt for Algorithm 1, Eqs. (40) and (41) to calculate Δt for Algorithm 2, and Eq. (48) to calculate Δt for Algorithm 3. Here we fix q(t) = 100/(1 + t) for a reasonable stepsize Δt. Now we apply the new RNBAs to some nonlinear algebraic equations derived from a PDE, an ODE, Brown's problem, and a nonlinear problem with a singular B.
Simple "Residual-Norm" Based Algorithms
289
5.1 Example 1

We consider a nonlinear heat conduction equation:
$u_t = \alpha(x)\,u_{xx} + \alpha'(x)\,u_x + u^2 + h(x,t)$,  (52)

$\alpha(x) = (x-3)^2, \quad h(x,t) = -7(x-3)^2 e^{-t} - (x-3)^4 e^{-2t}$,  (53)
with a closed-form solution u(x,t) = (x − 3)²e^{−t}. Applying the new algorithms to solve the above equation in the domain 0 ≤ x ≤ 1 and 0 ≤ t ≤ 1, we fix n1 = n2 = 15, which are the numbers of nodal points in a standard finite difference approximation of Eq. (52). Because a0, defined in Eq. (32), is a very important factor of our new algorithms, we show it in Fig. 1(a) for Algorithm 1, while the ratio of residual errors is shown in Fig. 1(b), the stepsize is shown in Fig. 1(c), and the residual errors with respect to the number of steps up to 200 are shown in Fig. 1(d). From Fig. 1(b) we can see that the numerical performance of Algorithm 1 is better than the Newton method, because we have a much smaller ratio of residual errors than that of the Newton method, which is calculated from Eq. (13) by inserting the stepsize shown in Fig. 1(c). The results obtained from Algorithms 2 and 3 are, respectively, shown in Figs. 2 and 3. In Algorithm 2 we set s0 = 0.9. No matter which algorithm is used, the performance is better than the Newton method, as shown in Figs. 1(b), 2(b) and 3(b). It is interesting to note that the three algorithms lead to three quite different a0, as shown in Figs. 1(a), 2(a) and 3(a). The residual errors for the three new algorithms are compared in Fig. 3(d). Up to 200 steps they give almost the same residual error; however, their convergence behaviors are slightly different.
5.2 Example 2

In this example we apply the new algorithms to solve the following boundary value problem:

$u'' = \frac{3}{2}\,u^2$,  (54)

$u(0) = 4, \quad u(1) = 1.$  (55)

The exact solution is

$u(x) = \frac{4}{(1+x)^2}.$  (56)
Figure 1: For example 1 by the first algorithm showing (a) a0, (b) the comparison of the ratios of residual errors of Algorithm 1 and of the Newton method, (c) stepsize, and (d) residual error.
Figure 2: For example 1 by the second algorithm showing (a) a0, (b) the comparison of the ratios of residual errors of Algorithm 2 and of the Newton method, (c) stepsize, and (d) residual error.
Figure 3: For example 1 by the third algorithm showing (a) a0, (b) the comparison of the ratios of residual errors of Algorithm 3 and of the Newton method, (c) stepsize, and (d) residual errors by three algorithms.
Simple "Residual-Norm" Based Algorithms
293
By introducing a finite difference discretization of u at the grid points we can obtain
$F_i = \frac{1}{(\Delta x)^2}\,(u_{i+1} - 2u_i + u_{i-1}) - \frac{3}{2}\,u_i^2$,  (57)

$u_0 = 4, \quad u_{n+1} = 1$,  (58)
where Δx = 1/(n + 1) is the grid length. We fix n = 9. In Fig. 4 we compare a0, the ratios of residual errors, and the residual errors up to 2000 steps. From Fig. 4(c), the three new algorithms converge very fast, decaying exponentially with different slopes. Algorithm 1 and Algorithm 2 with s0 = 0.9 have almost the same convergence speed, and are better than Algorithm 3. As shown in Fig. 5, the three new algorithms can give accurate numerical solutions with a maximum error smaller than 0.005. It is interesting that the values of a0 defined in Eq. (32) all tend to constants, as shown in Fig. 4(a), which indicates that there exist "attracting sets" in the state space of x for the above three algorithms. A further study will address the behavior of these "attracting sets". Under the same conditions we also apply the FTIM and the scalar homotopy method to this problem, where ν and the time stepsize used for the FTIM are 0.2 and 0.01, respectively, and the time stepsize used for the scalar homotopy method is 0.0001. From Fig. 6 we can observe that Algorithm 1 is faster than the FTIM, and much faster than the scalar homotopy method.
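A sketch of how this example can be set up follows (our own illustration; it reuses the rnba1_step sketch from Section 4.2, and the flat initial guess is our assumption, as the text does not state one):

```python
import numpy as np

def F_bvp(u):
    """Residual of Eq. (57) for u'' = (3/2) u^2 with u(0) = 4, u(1) = 1."""
    n = u.size
    dx = 1.0 / (n + 1)
    ue = np.concatenate(([4.0], u, [1.0]))   # boundary values of Eq. (58)
    return (ue[2:] - 2.0 * ue[1:-1] + ue[:-2]) / dx**2 - 1.5 * u**2

def jac_bvp(u):
    """Tridiagonal Jacobian B_ij = dF_i/du_j of the discretized BVP."""
    n = u.size
    dx = 1.0 / (n + 1)
    B = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    return B - np.diag(3.0 * u)

u = np.full(9, 1.0)          # n = 9 interior nodes, as in the text
for _ in range(2000):
    if np.linalg.norm(F_bvp(u)) < 1e-5:
        break
    u = rnba1_step(u, F_bvp, jac_bvp)
```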
5.3 Example 3

We consider an almost linear Brown's problem [Brown (1973)]:

$F_i = x_i + \sum_{j=1}^{n} x_j - (n+1), \quad i = 1, \ldots, n-1$,  (59)

$F_n = \prod_{j=1}^{n} x_j - 1$,  (60)
with a closed-form solution xi = 1, i = 1, ..., n. For n = 5, in Fig. 7 we show a0, the ratios of residual errors, and the residual errors up to 308 steps, at which point the iteration, started with the initial guess xi = 0.5, i = 1, ..., 5, satisfies the convergence criterion ε = 10⁻⁵ when Algorithm 1 is applied to solve the above nonlinear algebraic equations. The accuracy is very good, with a maximum error of 5.38 × 10⁻⁵ in xi, i = 1, ..., 5. From Fig. 7(c) it can be seen that Algorithm 1 is exponentially convergent, with three different slopes.
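The following sketch (ours; it reuses the rnba1_step sketch from Section 4.2) sets up Brown's problem with the initial guess xi = 0.5 and the stopping criterion ε = 10⁻⁵ stated above:

```python
import numpy as np

def F_brown(x):
    """Brown's almost linear problem, Eqs. (59)-(60)."""
    n = x.size
    F = x + x.sum() - (n + 1.0)          # Eq. (59) for i = 1, ..., n-1
    F[-1] = np.prod(x) - 1.0             # Eq. (60)
    return F

def jac_brown(x):
    """Jacobian: dF_i/dx_j = 1 + delta_ij for i < n; the last row holds
    the partial derivatives of the product in Eq. (60)."""
    n = x.size
    B = np.ones((n, n)) + np.eye(n)
    B[-1] = [np.prod(np.delete(x, j)) for j in range(n)]
    return B

x = np.full(5, 0.5)
for _ in range(1000):
    if np.linalg.norm(F_brown(x)) < 1e-5:
        break
    x = rnba1_step(x, F_brown, jac_brown)
```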
Figure 4: For example 2 solved by new algorithms showing (a) a0, (b) the ratio of residual errors, and (c) the residual errors of three algorithms.
As demonstrated by Han and Han (2010), Brown (1973) solved this problem by the Newton method, and obtained an incorrectly converged solution
(-0.579, -0.579, -0.579, -0.579, 8.90).
For n = 10, 30, Brown (1973) found that the Newton method diverged quite rapidly. Now, we apply our algorithms to this tough problem with n = 30.

Figure 5: For example 2 solved by three new algorithms: a comparison of the numerical errors.

Under the convergence criterion ε = 10⁻⁵, applying Algorithm 1 to solve the above nonlinear
algebraic equations, the accuracy is very good, with a maximum error of 2.09 × 10⁻⁴ in x30, while the errors of xi, i = 1, ..., 29 are all equal to 6.987 × 10⁻⁶. From Fig. 8(a) it can be seen that Algorithm 1 is exponentially convergent with several different slopes. Applying Algorithm 2 with a given s0 = 0.5, the accuracy is very good, with a maximum error of 9.79 × 10⁻⁵ in x30, while the errors of xi, i = 1, ..., 29 are all equal to 3.21 × 10⁻⁶. Algorithm 2 is more accurate than Algorithm 1; moreover, from Fig. 8(b) it can be seen that Algorithm 2 is exponentially convergent up to 50 steps. We also applied Algorithm 3 to this case with an initial guess xi = 2, i = 1, ..., 30. This algorithm converges much more slowly than Algorithms 1 and 2, as shown in Fig. 8(c), and as shown in Fig. 9 the accuracy is also much worse than that of Algorithms 1 and 2.
Figure 6: For example 2 solved by three different algorithms: a comparison of the residual errors.
When n is quite large, the last row of the matrix B at the initial point is almost zero, so the resulting nonlinear equations are very stiff and ill-conditioned. In Fig. 10 we show the residual error and the numerical error for an extremely ill-posed case of Brown's problem with n = 100. Applying Algorithm 2 with a given s0 = 0.5, the accuracy is very good, with a maximum error of 3.02 × 10⁻⁴ in x100, while the errors of xi, i = 1, ..., 99 are all equal to 3 × 10⁻⁶. Under the convergence criterion ε = 10⁻⁵, Algorithm 2 converges within 223 iterations.
Simple "Residual-Norm" Based Algorithms
297
Figure 7: For example 3 with n = 5 solved by Algorithm 1 showing (a) a0, (b) the ratio of residual errors, and (c) residual error.
5.4 Example 4
We consider a singular case of B obtained from the following two nonlinear algebraic equations [Boggs (1971)]:
$F_1 = x_1^2 - x_2 + 1$,  (61)

$F_2 = x_1 - \cos\frac{\pi x_2}{2}$,  (62)

$B = \begin{pmatrix} 2x_1 & -1 \\ 1 & \frac{\pi}{2}\sin\frac{\pi x_2}{2} \end{pmatrix}.$  (63)
Figure 8: For example 3 with n = 30 showing the residual errors for (a) Algorithm 1, (b) Algorithm 2, and (c) Algorithm 3.
Figure 9: For example 3 with n = 30 solved by three new algorithms: a comparison of the numerical errors.

Obviously, on the curve πx1 sin(πx2/2) + 1 = 0, B is singular, i.e., det(B) = 0. These equations have a closed-form solution (0, 1). As demonstrated by Boggs (1971), the Newton method does not converge to (0, 1); rather, it crosses the singular curve and converges to (−√2/2, 3/2). We apply Algorithm 1 to solve this problem within 126 iterations, and the results are shown in Fig. 11 for a0, the ratio of residual errors, and the residual error by the solid lines. At the termination of the iterative process we found that the accuracy of x1 is 1.77 × 10⁻⁸, and that of x2 is 9.50 × 10⁻⁹. When we apply Algorithm 3 to solve this problem, with 144 iterations satisfying the convergence criterion ε = 10⁻⁸, the results are shown in Fig. 11 for a0, the ratio of residual errors, and the residual error by the dashed lines. The accuracy of x1 is 1.3 × 10⁻⁸, and that of x2 is 9.54 × 10⁻⁹. Algorithm 3 is slightly more accurate than Algorithm 1. From Fig. 11(b) it can be seen that even at the terminal step the ratios are still within 0.9, which shows that Algorithms 1 and 3 could obtain even more accurate solutions if we let them run more steps. The accuracy and efficiency obtained by the present algorithms are much better than those obtained by Boggs (1971) and Han and Han (2010).
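A sketch of this example (ours; the starting point is our assumption, since the extracted text does not state one, and ε = 10⁻⁸ is the criterion quoted above for Algorithm 3; rnba1_step is the Algorithm 1 sketch from Section 4.2):

```python
import numpy as np

def F_boggs(x):
    """Eqs. (61)-(62): F1 = x1^2 - x2 + 1, F2 = x1 - cos(pi x2 / 2)."""
    return np.array([x[0] ** 2 - x[1] + 1.0,
                     x[0] - np.cos(0.5 * np.pi * x[1])])

def jac_boggs(x):
    """Jacobian B of Eq. (63); det(B) = pi x1 sin(pi x2 / 2) + 1 vanishes
    on the singular curve."""
    return np.array([[2.0 * x[0], -1.0],
                     [1.0, 0.5 * np.pi * np.sin(0.5 * np.pi * x[1])]])

x = np.array([1.0, 0.0])                    # assumed starting point
for _ in range(1000):
    if np.linalg.norm(F_boggs(x)) < 1e-8:
        break
    x = rnba1_step(x, F_boggs, jac_boggs)   # the paper reports x -> (0, 1)
```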
Figure 10: For example 3 with n = 100 solved by Algorithm 2 showing (a) residual error, and (b) numerical error.
6 Conclusions

Three "Residual-Norm Based Algorithms" (RNBAs) were established in this paper. Although we started from a continuous invariant manifold based on the simple residual-norm, and specified "gradient-flow" ODEs to govern the evolution of the unknown variables, we were able to derive the final novel algorithms in iterative form, without resorting to fictitious time steps. In summary, the three novel algorithms can be written concisely as:

$x_{k+1} = x_k - \omega\,\frac{\|B_k^T F_k\|^2}{\|A_k F_k\|^2}\,B_k^T F_k$,  (64)
Simple "Residual-Norm" Based Algorithms
301
5 (a) 4
a0
3
2
Residual Error Ratio of residual errors
1 (b) 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1
Algorithm 1 Algorithm 3
1E+1 (c) 1E+0 1E-1 1E-2 1E-3 1E-4 1E-5 1E-6 1E-7 1E-8 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 Number of Steps Figure 11: For example 4 solved by Algorithms 1 and 3 showing (a) a0, (b) ratios ofFriegsuirdeu1a1l:eFroror resx,aamnpdle(4c)sorelvseidubayl Aerlgrorsit.hms 1 and 3 showing (a) a0, (b) ratios of residual errors, and (c) residual errors.
in which

Algorithm 1: $\omega = 1$,  (65)

Algorithm 2: $\omega = 1 + \sqrt{1 - (1-s_0)\,a_k}$ if $1 - (1-s_0)\,a_k \geq 0$; $\omega = 1$ if $1 - (1-s_0)\,a_k < 0$,  (66)

Algorithm 3: $\omega = 1 + \sqrt{1 - a_k^{-1}}$,  (67)

where

$a_k = \frac{\|F_k\|^2\,\|A_k F_k\|^2}{\|B_k^T F_k\|^4}.$  (68)
Algorithms 1 and 3 possess the major advantage that they do not need any parameter in their formulations; however, a suitable choice of s0 < 1 in Algorithm 2 can sometimes speed up the convergence of the iterations. We have proved that all three novel algorithms are automatically convergent, and that all converge much faster than the Newton method. Several numerical examples, of a nonlinear PDE, a nonlinear ODE, the nonlinear Brown problem with large dimension, and a nonlinear system with a singular Jacobian, were tested to show the efficiency and accuracy of the RNBAs. Indeed, in most situations we observed exponential convergence with different slopes in the iteration process. The RNBAs are easy to implement numerically, do not involve the inversion of the Jacobian matrix, and can solve a large system of nonlinear algebraic equations very rapidly.
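A compact driver implementing the summary iteration, Eqs. (64)-(68), might look as follows (a sketch with our own assumed names, not the authors' code):

```python
import numpy as np

def rnba_solve(x0, F, jac, weight=lambda ak: 1.0, eps=1e-5, max_steps=10000):
    """Iterate Eq. (64), x_{k+1} = x_k - w * (||B^T F||^2 / ||A F||^2) B^T F,
    until ||F|| < eps. `weight` maps a_k of Eq. (68) to the factor of
    Eqs. (65)-(67); the default w = 1 is Algorithm 1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        Fx = F(x)
        if np.linalg.norm(Fx) < eps:
            break
        B = jac(x)
        BtF = B.T @ Fx
        AF = B @ BtF                                     # A_k F_k
        ak = (Fx @ Fx) * (AF @ AF) / (BtF @ BtF) ** 2    # Eq. (68)
        x = x - weight(ak) * (BtF @ BtF) / (AF @ AF) * BtF
    return x

# Algorithm 3, for instance:
# x = rnba_solve(x0, F, jac, weight=lambda ak: 1.0 + np.sqrt(1.0 - 1.0 / ak))
```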
Acknowledgement: Taiwan's National Science Council project NSC-99-2221-E-002-074-MY3, granted to the first author, is highly appreciated. The second author's research was supported by the World Class University (WCU) program through the National Research Foundation of Korea, funded by the Ministry of Education, Science & Technology (Grant no. R33-10049), during his visit to Pusan National University.

References

Allgower, E. L.; Georg, K. (1990): Numerical Continuation Methods: an Introduction. Springer, New York.

Atluri, S. N. (2002): Methods of Computer Modeling in Engineering and Sciences. Tech Science Press, 1400 pages.

Atluri, S. N.; Liu, H. T.; Han, Z. D. (2006): Meshless local Petrov-Galerkin (MLPG) mixed collocation method for elasticity problems. CMES: Computer Modeling in Engineering & Sciences, vol. 14, pp. 141-152.

Atluri, S. N.; Shen, S. (2002): The meshless local Petrov-Galerkin (MLPG) method: a simple & less-costly alternative to the finite and boundary element methods. CMES: Computer Modeling in Engineering & Sciences, vol. 3, pp. 11-51.

Atluri, S. N.; Zhu, T. L. (1998a): A new meshless local Petrov-Galerkin (MLPG) approach in computational mechanics. Comp. Mech., vol. 22, pp. 117-127.
Simple "Residual-Norm" Based Algorithms
303
Atluri, S. N.; Zhu, T. L. (1998b): A new meshless local Petrov-Galerkin (MLPG) approach to nonlinear problems in computer modeling and simulation. Comp. Model. Simul. Eng., vol. 3, pp. 187-196.

Boggs, P. T. (1971): The solution of nonlinear systems of equations by A-stable integration technique. SIAM J. Numer. Anal., vol. 8, pp. 767-785.

Brown, K. M. (1973): Computer oriented algorithms for solving systems of simultaneous nonlinear algebraic equations. In Numerical Solution of Systems of Nonlinear Algebraic Equations, Byrne, G. D. and Hall, C. A., Eds., pp. 281-348, Academic Press, New York.

Davidenko, D. (1953): On a new method of numerically integrating a system of nonlinear equations. Doklady Akad. Nauk SSSR, vol. 88, pp. 601-604.

Deuflhard, P. (2004): Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms. Springer, New York.

Han, T.; Han, Y. (2010): Solving large scale nonlinear equations by a new ODE numerical integration method. Appl. Math., vol. 1, pp. 222-229.

Hirsch, M.; Smale, S. (1979): On algorithms for solving f(x) = 0. Commun. Pure Appl. Math., vol. 32, pp. 281-312.

Ku, C. Y.; Yeih, W.; Liu, C.-S. (2010): Solving non-linear algebraic equations by a scalar Newton-homotopy continuation method. Int. J. Non-Linear Sci. Num. Simul., vol. 11, pp. 435-450.

Liu, C.-S. (2001): Cone of nonlinear dynamical system and group preserving schemes. Int. J. Non-Linear Mech., vol. 36, pp. 1047-1068.

Liu, C.-S. (2008): A time-marching algorithm for solving non-linear obstacle problems with the aid of an NCP-function. CMC: Computers, Materials & Continua, vol. 8, pp. 53-65.

Liu, C.-S. (2009a): A fictitious time integration method for a quasilinear elliptic boundary value problem, defined in an arbitrary plane domain. CMC: Computers, Materials & Continua, vol. 11, pp. 15-32.

Liu, C.-S. (2009b): A fictitious time integration method for the Burgers equation. CMC: Computers, Materials & Continua, vol. 9, pp. 229-252.

Liu, C.-S. (2009c): A fictitious time integration method for solving delay ordinary differential equations. CMC: Computers, Materials & Continua, vol. 10, pp. 97-116.

Liu, C.-S.; Atluri, S. N. (2008): A novel time integration method for solving a large system of non-linear algebraic equations. CMES: Computer Modeling in Engineering & Sciences, vol. 31, pp. 71-83.
Liu, C.-S.; Chang, C. W. (2009): Novel methods for solving severely ill-posed linear equations system. J. Marine Sciences & Tech., vol. 17, pp. 216-227.

Liu, C.-S.; Yeih, W.; Kuo, C. L.; Atluri, S. N. (2009): A scalar homotopy method for solving an over/under-determined system of non-linear algebraic equations. CMES: Computer Modeling in Engineering & Sciences, vol. 53, pp. 47-71.

Ramm, A. G. (2007): Dynamical System Methods for Solving Operator Equations. Mathematics in Science and Engineering, vol. 208 (series editor: Chu, C. K.), Elsevier, Amsterdam, Netherlands.

Zhu, T.; Zhang, J.; Atluri, S. N. (1998): A meshless local boundary integral equation (LBIE) method for solving nonlinear problems. Comp. Mech., vol. 22, pp. 174-186.

Zhu, T.; Zhang, J.; Atluri, S. N. (1999): A meshless numerical method based on the local boundary integral equation (LBIE) to solve linear and non-linear boundary value problems. Eng. Anal. Bound. Elem., vol. 23, pp. 375-389.
