Matrix triangulation calculators
Matrix triangulation using Gauss and Bareiss methods.
Below are two calculators for matrix triangulation.
The first uses the Gauss method, the second the Bareiss method. A description of the methods and their theory is below.
First, let us define a triangular, or row echelon, matrix:
The matrix is in row echelon form if:
- all zero rows, if any, are at the bottom of the matrix (equivalently, all nonzero rows are above any row of all zeroes)
- the leading coefficient (the first nonzero number from the left, also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it
Row echelon matrix example:
1 0 2 5
0 3 0 0
0 0 0 4
The notion of a triangular matrix is narrower; it is used for square matrices only. An upper triangular matrix is a square matrix in which all elements below the main diagonal are zero.
Example of an upper triangular matrix:
1 0 2 5
0 3 1 3
0 0 4 2
0 0 0 3
By the way, the determinant of a triangular matrix is calculated by simply multiplying all its diagonal elements.
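For the example above, det = 1 · 3 · 4 · 3 = 36.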
You may ask, what's so interesting about these row echelon (and triangular) matrices? Well, they have an amazing property – any rectangular matrix can be reduced to row echelon form by elementary transformations.
So, what are these elementary transformations, you may ask?
Elementary matrix transformations are the following operations:
- Row switching (a row within the matrix can be switched with another row)
- Row multiplication (each element in a row can be multiplied by a nonzero constant)
- Row addition (a row can be replaced by the sum of that row and a multiple of another row)
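To make these concrete, here is a minimal Python sketch of the three operations on a matrix stored as a list of rows (the helper names are mine, not part of the calculators):

def swap_rows(m, i, j):
    m[i], m[j] = m[j], m[i]                          # row switching

def scale_row(m, i, c):
    assert c != 0                                    # the constant must be nonzero
    m[i] = [c * x for x in m[i]]                     # row multiplication

def add_row_multiple(m, i, j, c):
    m[i] = [x + c * y for x, y in zip(m[i], m[j])]   # row addition: R_i += c * R_j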
What now?
Elementary matrix transformations preserve the equivalence of matrices. And, since a matrix is just a system of linear algebraic equations written in compact form, elementary matrix transformations don't change the solution set of the system the matrix represents.
By triangulating the matrix of the linear system AX = B to A'X = B', i.e. transforming the column B along with A, you can then perform so-called back substitution.
To explain, we will use the upper triangular matrix above and rewrite the equation system in a more familiar form (I've made up column B):
x1 + 2x3 + 5x4 = 27
3x2 + x3 + 3x4 = 21
4x3 + 2x4 = 20
3x4 = 12
It's clear that we first find x4 from the last equation, then substitute it into the previous equation to find x3, and so on, moving from the last equation to the first. That is what is called back substitution.
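As a sketch, back substitution for this system can be coded as follows in Python (the function name is mine; it assumes all diagonal elements are nonzero):

def back_substitute(a, b):
    # Solve AX = B for an upper triangular matrix A, moving from the last row up.
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]   # a[i][i] must be nonzero
    return x

A = [[1, 0, 2, 5], [0, 3, 1, 3], [0, 0, 4, 2], [0, 0, 0, 3]]
B = [27, 21, 20, 12]
print(back_substitute(A, B))   # [1.0, 2.0, 3.0, 4.0]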
This row-reduction algorithm is referred to as the Gauss method. The Gauss method is a classical method for solving systems of linear algebraic equations. It is also called Gaussian elimination, as it is a method of successive elimination of variables: with the help of elementary transformations, the equation system is reduced to row echelon (or triangular) form, from which all the variables can be found one by one, starting from the last.
Now, some thoughts about this method.
How can you zero the variable x1 in the second equation? By subtracting the first equation from it, multiplied by the factor a21/a11. Here is what it looks like in general form:
a11·x1 + a12·x2 = b1
a21·x1 + a22·x2 = b2
Subtracting the first equation, multiplied by a21/a11, from the second gives:
a11·x1 + a12·x2 = b1
(a22 − (a21/a11)·a12)·x2 = b2 − (a21/a11)·b1
There is no x1 in the second equation anymore.
In a generalized sense, the Gauss method can be represented as follows:
R_i := R_i − (a_ij / a_jj) · R_j,   j = 1 … N−1,   i = j+1 … N
where N – the number of rows,
R_i – the i-th row,
a_ij – the element in row i, column j.
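Here is a minimal sketch of this elimination loop in Python (the function name is mine; there is no pivoting yet, so every a_jj it divides by must be nonzero):

def gauss_triangulate(a):
    # Reduce the matrix (a list of rows) to row echelon form, in place.
    n = len(a)
    for j in range(n - 1):                  # column being eliminated
        for i in range(j + 1, n):           # rows below the pivot row
            factor = a[i][j] / a[j][j]      # fails if the pivot is zero
            a[i] = [x - factor * y for x, y in zip(a[i], a[j])]
    return a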
It seems to be a great method, but there is one thing – the division by a_jj occurring in the formula. Firstly, if a diagonal element equals zero, the method won't work. Secondly, rounding errors accumulate during the calculation, and the further it goes, the larger they become, so the result won't be precise.
To reduce these errors, modifications of the Gauss method are used. They are based on the fact that the larger the denominator, the lower the error. These modifications are the Gauss method with maximum selection in a column and the Gauss method with maximum selection in the entire matrix. As the names imply, before each variable-elimination step the element with the maximum absolute value is searched for in the current column (or the entire matrix), and the rows (and columns) are permuted so that this element takes the place of the pivot a_jj.
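The column-selection variant differs only in a row swap before each elimination step; a sketch under the same assumptions as above:

def gauss_pivot_triangulate(a):
    # Gauss elimination with maximum selection in a column.
    n = len(a)
    for j in range(n - 1):
        p = max(range(j, n), key=lambda i: abs(a[i][j]))   # largest |a_ij| in the column
        a[j], a[p] = a[p], a[j]                            # swap it into the pivot position
        if a[j][j] == 0:
            continue                                       # the whole column below is zero
        for i in range(j + 1, n):
            factor = a[i][j] / a[j][j]
            a[i] = [x - factor * y for x, y in zip(a[i], a[j])]
    return a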
However, there is a radical modification of the Gauss method – the Bareiss method.
How can you get rid of the division? By multiplying the row R_i by a_jj before subtracting. Then you have to subtract R_j, multiplied by a_ij, without any division:
R_i := a_jj · R_i − a_ij · R_j
It seems good, but there is a problem: the absolute values of the elements grow rapidly during the calculation.
Bareiss suggested dividing the expression above by the pivot of the previous elimination step and showed that if the initial matrix elements are whole numbers, the resulting numbers will be whole as well. For the first step, the divisor is assumed to be 1.
By the way, the fact that the Bareiss algorithm reduces a matrix with integer elements to a triangular matrix with integer elements, i.e. without accumulating rounding errors, is quite an important feature from the standpoint of machine arithmetic.
The Bareiss algorithm can be represented as:
a_ik := (a_jj · a_ik − a_ij · a_jk) / p,   i = j+1 … N
where p – the pivot of the previous elimination step (p = 1 for the first step).
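A minimal integer-only sketch of this recurrence in Python (names mine; it assumes nonzero pivots, and with integer input the division by the previous pivot is exact):

def bareiss_triangulate(a):
    # Fraction-free (Bareiss) reduction of an integer matrix to triangular form.
    n = len(a)
    prev_pivot = 1                        # divisor assumed for the first step
    for j in range(n - 1):
        for i in range(j + 1, n):
            a[i] = [(a[j][j] * a[i][k] - a[i][j] * a[j][k]) // prev_pivot
                    for k in range(len(a[i]))]
        prev_pivot = a[j][j]              # this step's pivot divides the next step
    return a

M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(bareiss_triangulate(M))   # [[1, 2, 3], [0, -3, -6], [0, 0, -3]]; det(M) = -3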
Like the Gauss method, this algorithm can be improved by maximum selection in a column (or the entire matrix) and permutation of the corresponding rows (or rows and columns).