“Gaussian elimination” is the standard method for solving systems of linear equations. It proceeds by choosing a pivot variable in one of the equations and eliminating it from the other equations by a suitable linear combination. These other equations then have one fewer variable, and one may iterate the process, leading to a system in triangular form that can be solved in a relatively straightforward way. In the so-called Gauss-Jordan method, the pivot variables are eliminated from all of the other equations, and this leads to the row reduced echelon form of the initial system, a new system in which the pivot variables are explicitly solved in terms of the remaining variables; it also has the merit of being independent of the intermediate steps, but that would be a story for another night.
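To fix ideas, here is a small illustration in Python, using sympy's exact rational arithmetic; the particular $3\times 3$ system is just an example of mine and plays no role in what follows.

```python
# A small Gauss-Jordan example with exact rational arithmetic:
# reduce an augmented matrix to its row reduced echelon form.
from sympy import Matrix

# The system  2x + y - z = 8,  -3x - y + 2z = -11,  -2x + y + 2z = -3.
A = Matrix([[ 2,  1, -1,   8],
            [-3, -1,  2, -11],
            [-2,  1,  2,  -3]])

R, pivots = A.rref()
print(R)        # identity block followed by the solution x = 2, y = 3, z = -1
print(pivots)   # (0, 1, 2): the pivot columns
```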
How the name of Gauss came to be attached to this method is another story, recounted in great detail by J. Grcar. As he explains, systems of linear equations and their solution already appear on Babylonian tablets (2000 BC), on the Egyptian Rhind papyrus (1550 BC), in the Chinese Nine chapters on the mathematical art (200 AD), in the Arithmetica of the Greek mathematician Diophantus (250 AD), and in the Āryabhaṭīya of the Hindu mathematician Āryabhaṭa (499 AD). Such systems were essentially absent from the mathematical treatises of the Renaissance, and it is to Newton (17th century) that we owe their revival in Western Europe. At the beginning of the 19th century, in relation with the problem in celestial mechanics/geodesy of fitting together multiple imprecise measurements, Gauss and Legendre invented the least squares method. This involved a system of linear equations which Gauss solved by what he called “common elimination”. In the 20th century, the development of computing machines and numerical analysis led to further work, from Cholesky (in relation with geodesy) and Crout to von Neumann and Goldstine and their $LDU$ decomposition.

Whoever has had to perform elimination by hand knows that the computations rapidly become tedious and often lead to more and more complicated fractions.
When computer calculations are done with floating point arithmetic, the difficulty of rounding errors appears. If, in a linear system $Ax=b$, the matrices $A$ and $b$ are only known up to an error, the system that is actually solved is rather $(A+E)x=b+\delta b$, and it is obviously important to compare its solution with the solution of the initial system. One way to make this comparison involves the inverse of $A$, which is unknown at this stage. The product of the norms $\|A\| \,\|A^{-1}\|$ is the condition number of $A$, and one cannot avoid the problem of ill-conditioned matrices, which inherently lead to a loss of precision.
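As an illustration, here is a minimal sketch in Python/NumPy; the Hilbert matrix is merely a standard example of an ill-conditioned matrix, chosen by me, not one discussed in this post.

```python
# Sketch: the condition number ||A|| * ||A^{-1}|| bounds how much a relative
# perturbation of the data can be amplified in the solution.
import numpy as np
from scipy.linalg import hilbert

n = 10
A = hilbert(n)               # the Hilbert matrix, a classical ill-conditioned example
x_true = np.ones(n)
b = A @ x_true

print(np.linalg.cond(A))     # around 1.6e13 for n = 10

# A tiny relative perturbation of the right-hand side...
rng = np.random.default_rng(0)
delta_b = 1e-10 * np.linalg.norm(b) * rng.standard_normal(n)
x_pert = np.linalg.solve(A, b + delta_b)

# ...can produce a relative error in the solution that is larger
# by several orders of magnitude.
print(np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true))
```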
But when floating point numbers are used in the computation, a new problem appears, even when one restricts to well-conditioned systems. Floating point numbers are of the form $\pm a\cdot 10^e$ (in base 10, say), where $a$ is a real number between $1$ and $10$ which is known up to a fixed number of decimal places. In other words, floating point numbers are known up to a relative error. Let us examine the consequence for the subtraction of floating point numbers. Consider two such numbers, say $x=a\cdot 10^e$ and $x'=a'\cdot 10^e$, with the same exponent $e$, and such that $a$ and $a'$ are close. Their difference is given by $x'-x=(a'-a)\cdot 10^e$, but since $a$ and $a'$ are close, their difference $y=x'-x$ is no longer between $1$ and $10$ but may be, say, of size $10^{-5}$, so that the floating point expression of $x'-x$ is $(10^5(a'-a))\cdot 10^{e-5}=b\cdot 10^{e-5}$; the problem is that the last 5 decimals of $b$ are absolutely unknown, which leads to a relative error for $y$ that is $10^5$ times as big as expected!
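In code, this loss of significance (usually called catastrophic cancellation) looks as follows; the numbers in this small Python sketch are made up for the illustration.

```python
# Cancellation: subtracting two nearly equal floating point numbers destroys
# relative accuracy.  Both inputs carry about 16 significant digits, but their
# difference, of size ~1e-13, inherits an absolute error of ~1e-16, hence a
# relative error of ~1e-3: only its first few digits mean anything.
a  = 1.234567891234567
ap = 1.234567891234666

y = ap - a
print(y)   # about 9.9e-14; the trailing digits printed are numerical noise
```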
Now, by its very essence, the elimination method features a lot of such subtractions, hence it is inherently not very well suited to floating point numbers.
In the 1960s, the British mathematician Wilkinson analysed the situation very precisely. He showed that for Gaussian elimination, the main parameter is the relative size of the matrices that appear in the process. To set up some notation, imagine that the initial system $Ax=b$ is transformed, step by step, into a series of equivalent systems $A^{(r)}x=b^{(r)}$, where $A^{(r)}=(a_{ij}^{(r)})$ is a matrix whose first $r$ rows are in triangular form, while the remaining $n-r$ rows still need to be worked on. To get the matrix $A^{(r+1)}$, one subtracts a multiple $m_{ir}$ of the $r$th row from the $i$th row, for $i$ ranging from $r+1$ to $n$, where the multipliers $m_{ir}$ are defined by
\[ m_{ir}=a_{ir}^{(r)}/a_{rr}^{(r)}. \]
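In code, the elimination step looks roughly as follows; this Python/NumPy sketch uses no pivoting strategy at all, so it assumes that every pivot $a_{rr}^{(r)}$ it meets is nonzero.

```python
import numpy as np

def forward_elimination(A, b):
    """Reduce A x = b to an equivalent upper triangular system.

    Sketch only: no pivoting, so a zero (or tiny) pivot A[r, r] breaks it;
    pivoting strategies are discussed just below.
    """
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for r in range(n - 1):
        for i in range(r + 1, n):
            m = A[i, r] / A[r, r]        # the multiplier m_{ir}
            A[i, r:] -= m * A[r, r:]     # subtract m_{ir} times row r from row i
            b[i]     -= m * b[r]
    return A, b  # back substitution on the triangular system finishes the job
```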
Large multipliers lead to a loss of precision, but if the complete pivoting strategy is used (the pivot is chosen as the entry of largest absolute value in the remaining submatrix), one has $|m_{ir}|\leq 1$. In this case, one observes that a bound $|a_{ij}^{(r)}|\leq M$ for the coefficients of $A^{(r)}$ leads to the bound $|a_{ij}^{(r+1)}|\leq 2M$ at the next step. At the last step, one gets $|a_{ij}^{(n)}|\leq 2^{n-1}M$, where $M=\sup(|a_{ij}|)$. Consequently, the relevant constant to be estimated,
\[ R(A) = \sup_r \frac{\sup_{i,j}|a_{ij}^{(r)}|}{\sup_{i,j}|a_{ij}|}, \]
satisfies $R(A)\leq 2^{n-1}$.
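This quantity is easy to measure experimentally. Here is a minimal sketch in Python/NumPy that performs the elimination with complete pivoting and records the largest intermediate coefficient relative to the initial one (the helper name growth_factor is mine).

```python
import numpy as np

def growth_factor(A):
    """R(A) for Gaussian elimination with complete pivoting (sketch)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    max0 = np.abs(A).max()
    growth = 1.0
    for r in range(n - 1):
        # Complete pivoting: bring the largest remaining entry to position (r, r).
        S = np.abs(A[r:, r:])
        i, j = np.unravel_index(S.argmax(), S.shape)
        A[[r, r + i], :] = A[[r + i, r], :]
        A[:, [r, r + j]] = A[:, [r + j, r]]
        # Eliminate the entries below the pivot.
        for k in range(r + 1, n):
            A[k, r:] -= (A[k, r] / A[r, r]) * A[r, r:]
        growth = max(growth, np.abs(A).max() / max0)
    return growth

print(growth_factor(np.random.randn(100, 100)))  # modest for a random matrix
```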
In fact, Wilkinson (1961) gave a much better bound than $2^{n-1}$. Let $B^{(r)}$ be the square matrix of size $n-r$ that remains to be worked on after the $r$th step, and let $b_{r+1}$ be the maximum size of its coefficients; this is also the size of the chosen pivot, since one does complete pivoting. One has the following formula, expressing that the determinant is, up to sign, the product of the pivots:
\[ |\det(B^{(r)})| = b_{r+1}\cdots b_n. \]
Moreover, the Euclidean norms of the columns of $B^{(r)}$ are bounded above by $\sqrt{n-r}\, b_{r+1}$, and Hadamard's inequality (“the volume of a parallelepiped is smaller than the product of the lengths of its edges”) implies that
\[ | \det(B^{(r)})| \leq (n-r)^{(n-r)/2} b_{r+1}^{n-r}. \]
Together, these relations lead to an upper bound
\[ R(A) \leq n^{1/2} \left( 2\cdot 3^{1/2}\cdot 4^{1/3}\cdots n^{1/(n-1)}\right)^{1/2}, \]
roughly $n^{\log(n)/4}$, a bound that Wilkinson considered to be a “severe overestimate”; in Rounding Errors in Algebraic Processes (Dover, 1963, bottom of p. 97), he even notes that “No matrix has yet been discovered for which $R(A)>n$.”
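To get a feeling for how these bounds compare, one can evaluate them for a few values of $n$; the following quick Python sketch just implements the formulas above.

```python
import math

def wilkinson_bound(n):
    """n^(1/2) * (2 * 3^(1/2) * 4^(1/3) * ... * n^(1/(n-1)))^(1/2)."""
    prod = 1.0
    for k in range(2, n + 1):
        prod *= k ** (1.0 / (k - 1))
    return math.sqrt(n * prod)

for n in (10, 50, 100):
    # n, the trivial bound 2^(n-1), and Wilkinson's bound: the latter grows
    # slowly compared with 2^(n-1), yet is still much larger than n.
    print(n, 2.0 ** (n - 1), round(wilkinson_bound(n), 1))
```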
Wilkinson's remark became known as Wilkinson's conjecture, although Wilkinson himself did not state it as such. Tornheim (1964) proved that Hadamard matrices (matrices with entries $\pm1$ and pairwise orthogonal columns and rows) satisfy $R(A)\geq n$, and this led Cryer (1968) to formally state the conjecture and to suspect that $R(A)<n$ unless $A$ is a Hadamard matrix. Matrices have been exhibited such that $R(A)\geq (n+1)/2$ (Higham & Higham, 1989), and random matrices seem to have an $R$-factor roughly $n^{1/2}$ (Trefethen & Schreiber, 1990). In fact, Hadamard matrices of size $n=12$ (Edelman & Mascarenhas, 1995) or $n=16$ (Kravvaritis & Mitrouli, 2009) satisfy $R(A)=n$, but this is definitely nontrivial.
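For what it is worth, the claim about Hadamard matrices is easy to check numerically in small size. The following self-contained Python sketch repeats the complete pivoting routine from the earlier sketch and applies it to the $4\times 4$ Sylvester–Hadamard matrix.

```python
import numpy as np

def growth_factor(A):
    # Same complete pivoting elimination as in the sketch above.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    max0 = np.abs(A).max()
    growth = 1.0
    for r in range(n - 1):
        S = np.abs(A[r:, r:])
        i, j = np.unravel_index(S.argmax(), S.shape)
        A[[r, r + i], :] = A[[r + i, r], :]
        A[:, [r, r + j]] = A[:, [r + j, r]]
        for k in range(r + 1, n):
            A[k, r:] -= (A[k, r] / A[r, r]) * A[r, r:]
        growth = max(growth, np.abs(A).max() / max0)
    return growth

H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)        # a 4x4 Hadamard matrix (Sylvester construction)
print(growth_factor(H4))    # prints 4.0: R(A) = n for this Hadamard matrix
```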
However, Gould (1991) exhibited a matrix $A$ of size $n=13$ with growth factor $R(A)=13.0205$, thus providing a counterexample to Wilkinson's “conjecture”. To that end, he reduced the question to finding a solution of a nonlinear programming problem with roughly $n^3/3\approx 700$ variables, fortunately a sparse one, a computation he did using a programming package he had developed with Conn and Toint. The matrix he gives has integer coefficients with up to 20 digits!
But the story does not end here!
Edelman tried to replicate Gould's computations using the computer algebra systems Mathematica and Maple; what he found is that the growth factor of Gould's matrix is around $7.355$, consistently in both systems! As he writes, one of them could have had a bug, but it is rather unlikely that both had the same one.
What happened, and you will agree that there is much irony in this story, is that Gould had performed his computations using floating point arithmetic. A near tie in the 6th pivot led to an incorrect choice of pivot and to an erroneous computation, even with the double precision that he had used.
Fortunately, Edelman (1992) showed that changing one coefficient of Gould's matrix by $10^{-7}$ yields a growth factor of $13.02$, so that Wilkinson's “conjecture” is definitely incorrect.