A COMPARISON OF THE GAUSSIAN ELIMINATION AND CHOLESKY DECOMPOSITION METHODS FOR LINEAR SYSTEMS OF EQUATIONS
This project work is concerned with a comparative study of the Gaussian elimination and Cholesky decomposition methods for solving linear systems of equations.
Chapter one is concerned with linear systems and the various methods of solving them. Chapters two and three give an in-depth study of the Gaussian elimination method and the Cholesky decomposition method respectively, with their strengths and weaknesses. Chapter four deals with the various application areas of both methods, their comparison, and finally the conclusion.
TABLE OF CONTENTS
CHAPTER ONE: LINEAR SYSTEM OF EQUATIONS
SOLVING LINEAR SYSTEM
METHODS OF SOLVING LINEAR SYSTEM
CHAPTER TWO: GAUSSIAN ELIMINATION METHOD
2.2 DERIVATION OF GAUSSIAN ELIMINATION METHOD
2.3 CHECKING PROCEDURES IN GAUSSIAN ELIMINATION
2.4 ADVANTAGES OF GAUSSIAN ELIMINATION METHOD
2.5 DISADVANTAGES OF GAUSSIAN ELIMINATION METHOD
CHAPTER THREE: CHOLESKY DECOMPOSITION METHOD
3.2 DERIVATION OF CHOLESKY DECOMPOSITION METHOD ON A CASE BY CASE BASE
3.3 ADVANTAGES OF CHOLESKY DECOMPOSITION METHOD
3.4 DISADVANTAGES OF CHOLESKY DECOMPOSITION METHOD
CHAPTER FOUR: APPLICATIONS, COMPARISON AND CONCLUSION
4.2 APPLICATION OF LINEAR SYSTEMS
4.3 COMPARISON BETWEEN GAUSSIAN ELIMINATION AND CHOLESKY DECOMPOSITION METHODS
LINEAR SYSTEM OF EQUATIONS
A number of methods have been used to solve systems of linear equations. A system of equations is a set or collection of equations solved together (Noreen Jamil, 2012); a collection of linear equations in the same set of variables is termed a system of linear equations. Various methods have evolved for solving linear equations, but no single best method has yet been proposed for solving systems of linear equations.
Among the methods of solving linear systems of equations, the Gaussian elimination and Cholesky decomposition methods will be discussed in detail.
A wide variety of problems lead ultimately to the need to solve a linear system of equations. Linear systems of equations are associated with many problems in engineering and science, as well as with applications of mathematics to the social sciences and the quantitative study of business and economic problems.
According to Atkinson (1985), systems of simultaneous linear equations occur in solving problems in a wide variety of areas, including mathematics, statistics, the physical sciences (involving quantities such as temperature, voltage, population and displacement), the social sciences, engineering and business. They arise directly in solving real-life problems.
The world sometimes reveals itself to us through observable relationships among the relevant variables; what it makes evident are relationships that describe how the variables and their rates of change affect one another.
Such real-life problems give rise to systems of simultaneous linear equations. In almost every human activity, man seems compelled to uncover the fundamental relationships that exist among the objects he observes. According to Maron (1982), in order to make the relationships that exist between variables explicit, we frequently attempt to build a mathematical model that accurately reflects the real-life situation. Many mathematical models share the same basic structure, although disparities in symbolic notation may occur. They can arise from economics or transportation, where the need arises to make efficient allocations among several points, or from population growth, where the quantities x1, x2, …, xn represent the net flow from one point to another or the number of individuals in a particular age group at a particular time.
There are various methods for solving linear systems of simultaneous equations. In numerical analysis, the techniques for solving systems of linear equations belong to two categories: direct and iterative methods. Direct methods obtain the exact solution (in real arithmetic) in finitely many operations, whereas iterative methods generate a sequence of approximations that only converge in the limit to the solution. Among the direct methods are the Gaussian elimination method and the Cholesky decomposition method; others are the matrix inverse method, the LU factorization method and Cramer's rule.
The elimination approach reduces the given system of equations to a form from which the solution can be obtained by simple substitution. Since calculators and computers work with a limited number of digits, this may lead to round-off errors and produce poorer results. Generally, direct methods are best for full or banded matrices, whereas iterative methods are best for very large and sparse matrices. Iterative methods provide an alternative to the direct methods for solving systems of linear equations: they involve assuming some initial values, which are then refined repeatedly until they reach some accepted range of accuracy. The Jacobi and Gauss-Seidel methods are good examples of iterative methods.
Systems of linear equations may be grouped as follows: a system is either consistent, having a unique solution or infinitely many solutions, or inconsistent, having no solution.
An inconsistent system is one that contains an equation of the form 0 = c with c ≠ 0; such a system has no solution. A consistent system contains no equation of that form and therefore has at least one solution. For example, the system
x + 2y – 3z = 4
y + 2z = 5
is consistent. Writing r for the number of non-zero equations and n for the number of unknowns, there are two cases:
r = n, that is, there are as many non-zero equations as unknowns. Then we can solve uniquely for the unknowns, and there exists a unique solution for the system.
r < n, that is, there are more unknowns than there are non-zero equations. Then we may arbitrarily assign values to n – r of the unknowns and solve uniquely for the remaining ones. Accordingly, there exist an infinite number of solutions.
For example, for a system in which there is no equation of the form 0 = c with c ≠ 0, the system is consistent; furthermore, if there are three unknowns and three non-zero equations, the system has a unique solution.
Also, the system
x + 2y – 3z + w = 4
y + 2z + 3w = 5
is consistent, and since there are more unknowns than non-zero equations, the system has an infinite number of solutions. On the other hand, a linear system can be written in matrix form as follows:
AX = b.
A linear equation in the unknowns x1, x2, …, xn is obtained by requiring a linear combination of them to assume a prescribed value b, that is,
a1x1 + a2x2 + … + anxn = b.
Such equations arise frequently; in general, an n x n linear system consists of n such equations in n unknowns.
However, it will be necessary to introduce the fundamental principles of matrices in linear systems, because matrices make it possible to describe linear systems in a simple way, so that solving an n x n linear system seems like solving an ordinary equation, as follows:
Besides, matrices can be used to show how to generalize the Newton-Raphson method to solve non-linear n x n systems.
SOLVING LINEAR SYSTEMS
The linear system in explicit form is
a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
…
an1x1 + an2x2 + … + annxn = bn
where x1, x2, x3, …, xn are the unknowns and the aij (i, j = 1, 2, …, n) and bi (i = 1, 2, …, n) are given scalars.
According to Maron (1982) and Atkinson (1985), the definition of matrix multiplication makes it possible to write this linear system compactly as a single matrix equation
AX = b ……………………(3)
where A = [aij] is a given n x n matrix, assumed to be non-singular, X = (x1, x2, …, xn)T is the solution vector to be determined and b = (b1, b2, …, bn)T is a given column vector ……………………(4)
If A is non-singular, then the numerical vector A-1b satisfies equation (3) above, because A(A-1b) = (AA-1)b = Ib = b. Conversely, any vector X satisfying AX = b must, on multiplying both sides by A-1, satisfy X = A-1b. Thus, if A is non-singular, then for any b the system AX = b has the unique solution X = A-1b. This makes it easy to solve the system AX = b whenever A-1 is known.
However, the easiest way to solve a 2 x 2 linear system is to use the inverse directly. A 2 x 2 matrix A with first row (a, b) and second row (c, d) is invertible if and only if ad – bc ≠ 0, in which case A-1 is 1/(ad – bc) times the matrix with first row (d, -b) and second row (-c, a). This can be called the formula for A-1 when n = 2.
It can easily be verified that if n > 2 and all that is required is the solution of a single n x n system AX = b, then finding A-1 and then multiplying by b is generally not the most efficient way to obtain the solution. However, we shall consistently denote the solution of AX = b as A-1b even if it is not actually obtained as the product A-1b.
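As an illustrative sketch, the n = 2 inverse formula above can be coded directly; the function name and the sample system are our own, not from the text.

```python
# Hedged sketch: solving a 2 x 2 system with the n = 2 inverse formula.
# The matrix has first row (a, b) and second row (c, d); the right-hand
# side is (b1, b2).

def solve2x2_by_inverse(a, b, c, d, b1, b2):
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: ad - bc = 0")
    # A^-1 = (1 / det) * [[d, -b], [-c, a]], so X = A^-1 b gives:
    x1 = (d * b1 - b * b2) / det
    x2 = (-c * b1 + a * b2) / det
    return x1, x2

# Sample system: 2x + y = 5 and x + 3y = 10, whose solution is x = 1, y = 3.
print(solve2x2_by_inverse(2, 1, 1, 3, 5, 10))  # (1.0, 3.0)
```

As the text remarks, for n > 2 forming A-1 explicitly is generally less efficient than elimination; a formula like this is worthwhile only in the 2 x 2 case.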
METHODS OF SOLVING LINEAR SYSTEMS OF EQUATIONS
CRAMER'S RULE
We consider the linear system (3). Suppose that A is non-singular; then equation (3) can be re-written as X = A-1b.
If det A ≠ 0, then the unique solution of AX = b is
Xj = det Aj / det A, j = 1, 2, …, n,
where Aj is the matrix obtained by replacing the jth column of A by b.
Finding X by Cramer's rule requires evaluating the determinant of A and of n additional n x n matrices A1, A2, …, An. The arrow rule makes Cramer's rule convenient when n = 2 and reasonably easy to use when n = 3. However, for larger n the efficient way to evaluate det A alone is
det A = (-1)^p × (product of the pivots of L/U),
where the sign depends on whether the number p of row interchanges used to obtain L/U is even or odd.
Consider a linear system of n equations in n unknowns, and let A = [aij] be the coefficient matrix, so that we can write the given system as AX = b, where X is the column vector of unknowns and b the column vector of constants.
If |A| ≠ 0, the system has the unique solution
Xi = |Ai| / |A|, i = 1, 2, …, n,
where Ai is the matrix obtained from A by replacing the ith column of A by b. If |A| ≠ 0, then A is non-singular, hence X = A-1b = (1/|A|) (adj A) b.
This means that Xi = (A1i b1 + A2i b2 + … + Ani bn) / |A| for each i, where Aji denotes the cofactor of aji.
If we evaluate |Ai| by expanding about the ith column, we find that |Ai| = A1i b1 + A2i b2 + … + Ani bn for i = 1, 2, …, n, which agrees with the expression for Xi above. The determinant |Ai| can also be calculated by other methods.
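A minimal sketch of Cramer's rule as stated above, with determinants computed by cofactor expansion about the first row; the function names and the sample system are our own, and the approach is practical only for small n, as the text notes.

```python
# Hedged sketch of Cramer's rule: X_j = det(A_j) / det(A), where A_j is
# A with its j-th column replaced by b. Matrices are lists of row lists.

def det(M):
    """Determinant by cofactor expansion about the first row (small n only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve A x = b by Cramer's rule."""
    d = det(A)
    if d == 0:
        raise ValueError("det A = 0: Cramer's rule does not apply")
    x = []
    for j in range(len(A)):
        # A_j: replace the j-th column of A by b
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        x.append(det(Aj) / d)
    return x

# Sample system: 2x + y = 5 and x + 3y = 10, whose solution is x = 1, y = 3.
print(cramer([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```

Cofactor expansion costs on the order of n! operations, which is why the text recommends evaluating determinants through the pivots of an LU factorization for larger n.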
THE TRIANGULAR DECOMPOSITION METHOD
The method of triangular decomposition sets out to decompose a square matrix A into the product of a lower triangular matrix L and an upper triangular matrix U, so that A = LU.
But for this project, we shall be discussing the method used by Cholesky. There are two forms: that which has a unit lower triangular matrix L, and that which has a unit upper triangular matrix U.
In the case of a 4 x 4 matrix we have
Multiplying the rows of L by the first column of U, we get L11 = a11, L21 = a21, L31 = a31, L41 = a41; the first column of L is the same as the first column of A. We now multiply the first row of L by the columns of U.
Thus the first row of U is determined. In this method we alternate between getting a column of L and a row of U; so next we obtain the equations for the second column of L by multiplying the rows of L by the second column of U.
Proceeding in the same fashion, the equations we need are obtained for each succeeding column of L and row of U.
The general formulas for the elements of L and U corresponding to the coefficient matrix of n simultaneous equations can be written as
Lij = aij – (Li1U1j + Li2U2j + … + Li,j-1Uj-1,j), for i >= j,
Uij = (aij – (Li1U1j + Li2U2j + … + Li,i-1Ui-1,j)) / Lii, for i < j, with Uii = 1.
For j = 1, the rule for L reduces to Li1 = ai1. For i = 1, the rule for U reduces to U1j = a1j / L11.
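The alternating column-of-L / row-of-U scheme described above can be sketched as follows, assuming the form in which U has a unit diagonal; the function name and the small 2 x 2 example are our own.

```python
# Hedged sketch of triangular decomposition A = L U, alternating between
# a column of L and a row of U, with U having a unit diagonal.

def triangular_lu(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        # column j of L: L[i][j] = a[i][j] - sum_k L[i][k] * U[k][j]
        for i in range(j, n):
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        # row j of U: U[j][i] = (a[j][i] - sum_k L[j][k] * U[k][i]) / L[j][j]
        for i in range(j + 1, n):
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

A = [[4.0, 2.0], [2.0, 3.0]]
L, U = triangular_lu(A)
print(L)  # [[4.0, 0.0], [2.0, 2.0]]
print(U)  # [[1.0, 0.5], [0.0, 1.0]]
```

Note that for j = 1 the column rule indeed gives Li1 = ai1, and for i = 1 the row rule gives U1j = a1j / L11, matching the special cases stated in the text.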
In iterative methods, we start from an approximation to the true solution and, if successful, obtain better approximations from a computational cycle repeated as often as necessary to achieve the required accuracy, so that the amount of arithmetic depends upon the accuracy required. Iterative methods are used mainly for problems in which convergence is known to be rapid, and for systems of large order with many zero coefficients.
Generally, these methods have more modest storage requirements than direct methods and may also be faster depending on the iterative method and the problem. They usually also have better vectorisation and parallelisation properties.
The Jacobi and Gauss-Seidel methods are examples of iterative methods.
JACOBI ITERATIVE METHOD
The Jacobi iteration is also known as the method of simultaneous displacements.
It proceeds by solving the first equation for X1 in terms of the remaining unknowns, solving the second equation for X2 in terms of the remaining unknowns, solving the third equation for X3 in terms of the remaining unknowns, and so on.
The system 20X1 + X2 – X3 = 17
X1 – 10X2 + X3 = 13
-X1 + X2 + 10X3 = 18
would be re-written as
X1 = (17 – X2 + X3) / 20
X2 = (X1 + X3 – 13) / 10
X3 = (18 + X1 – X2) / 10
To solve the system by Jacobi iteration, make an initial approximation to the solution; when no better choice is available, use Xi(0) = 0, i = 1(1)n. Substitute the initial approximation into the right-hand side of the re-written equations and use the resulting values of X1, X2, …, Xn as a new approximation to the solution. Thus, as new values are generated, they are not immediately used but rather are retained for the next iteration.
To solve the system above by the Jacobi method, we would substitute the initial approximation X1 = 0, X2 = 0, X3 = 0 into the right-hand side of the re-written system and calculate the new approximation. To improve the approximation, we would repeat the process.
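The Jacobi iteration just described can be sketched for the 3 x 3 example system; the function name is our own, and we start from the zero vector as in the text.

```python
# Hedged sketch of the Jacobi iteration for the example system
#   20X1 + X2 - X3 = 17,  X1 - 10X2 + X3 = 13,  -X1 + X2 + 10X3 = 18.

def jacobi_step(x1, x2, x3):
    # Each equation solved for its diagonal unknown, using only old values
    new_x1 = (17 - x2 + x3) / 20
    new_x2 = (x1 + x3 - 13) / 10
    new_x3 = (18 + x1 - x2) / 10
    return new_x1, new_x2, new_x3

x = (0.0, 0.0, 0.0)  # initial approximation Xi(0) = 0
for _ in range(25):
    x = jacobi_step(*x)
print(x)  # approaches the exact solution (1, -1, 2)
```

Because the coefficient matrix here is strictly diagonally dominant, the iterates converge rapidly to the exact solution (1, -1, 2).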
GAUSS-SIEDEL ITERATION METHOD
History has it that this iteration was used by Gauss in his calculations and was independently discovered by Seidel. The striking feature of this method is that each component of X(r+1) is used as soon as it is computed; for this reason the Gauss-Seidel method is sometimes called the method of successive displacements. Recall the system of linear equations AX = b.
In the Gauss-Seidel method the system is re-written in the same form as for the Jacobi iteration, each equation being solved for its diagonal unknown, provided the diagonal elements aii ≠ 0.
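A sketch of the Gauss-Seidel method applied to the same 3 x 3 example used for the Jacobi iteration; the only change from the Jacobi sketch is that each newly computed component is used immediately within the same sweep. The function name is our own.

```python
# Hedged sketch of the Gauss-Seidel (successive displacements) iteration
# for the example system
#   20X1 + X2 - X3 = 17,  X1 - 10X2 + X3 = 13,  -X1 + X2 + 10X3 = 18.

def gauss_seidel_step(x1, x2, x3):
    x1 = (17 - x2 + x3) / 20   # uses the old x2, x3
    x2 = (x1 + x3 - 13) / 10   # uses the new x1 immediately
    x3 = (18 + x1 - x2) / 10   # uses the new x1 and x2
    return x1, x2, x3

x = (0.0, 0.0, 0.0)  # initial approximation Xi(0) = 0
for _ in range(15):
    x = gauss_seidel_step(*x)
print(x)  # approaches the exact solution (1, -1, 2)
```

Using each new component at once typically makes Gauss-Seidel converge in fewer sweeps than Jacobi on systems like this one.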
Direct methods are those which, in the absence of round-off or other errors, yield the exact solution in a finite number of elementary arithmetic operations. In practice, because a computer works with a finite word length, direct methods do not lead to exact solutions. The methods used for exact solutions are the Gaussian elimination method and the Cholesky decomposition method; we shall discuss these methods in chapters two and three of this project work respectively.
REPEATED DIRECT METHOD
If the system of equations is ill-conditioned, that is, if the determinant of the coefficient matrix is infinitesimal in comparison with the size of the elements of the matrix, we use the repeated direct method. That is, we interpret the equations of the system as hyper-planes in an n-dimensional co-ordinate system; the elements (ai1, ai2, …, ain) of the ith row represent the direction numbers of the normal to the ith hyper-plane. In this context, we can interpret an ill-conditioned system of linear equations as a set of n hyper-planes whose normals are very nearly parallel. If the system of equations is thus ill-conditioned, then the solution obtained from a direct method may be a poor representation of the true solution because of the accumulating effect of round-off error. This round-off error can be limited to some degree by using pivotal condensation.
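As a hypothetical illustration (the numbers are our own, not from the text), the following shows what ill-conditioning means in practice: for two nearly parallel lines, a tiny change in the right-hand side produces a large change in the solution.

```python
# Hedged sketch of ill-conditioning: the two "hyper-planes" below have
# normals (1, 1) and (1, 1.0001), which are very nearly parallel.

def solve2x2(a, b, c, d, b1, b2):
    """Solve the 2 x 2 system with rows (a, b) and (c, d) by the inverse formula."""
    det = a * d - b * c
    return ((d * b1 - b * b2) / det, (-c * b1 + a * b2) / det)

x_exact = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)
x_perturbed = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002)  # b2 changed by 1e-4

print(x_exact)      # close to (1.0, 1.0)
print(x_perturbed)  # close to (0.0, 2.0): a large change from a tiny perturbation
```

The determinant (0.0001) is tiny compared with the matrix elements, so any round-off in the data or the arithmetic is amplified enormously, which is exactly the situation the text describes.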