Below are two calculators for matrix triangulation. The first uses the Gauss method, the second the Bareiss method. A description of the methods and some theory follows.
So, first let's define a triangular, or row echelon, matrix.
A matrix is in row echelon form if:
- all zero rows, if any, are at the bottom of the matrix (equivalently, all nonzero rows are above any row of all zeros)
- the leading coefficient (the first nonzero number from the left, also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it
Row echelon matrix example:
1 0 2 5
0 3 0 0
0 0 0 4
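These conditions can be checked mechanically. Here is a minimal Python sketch (the function name is mine, not from the article) that tests the row echelon conditions for a matrix given as a list of rows:

```python
def is_row_echelon(m):
    last_pivot = -1        # column of the previous row's leading coefficient
    seen_zero_row = False  # zero rows must only appear at the bottom
    for row in m:
        # find the column of the first nonzero element, if any
        pivot = next((j for j, v in enumerate(row) if v != 0), None)
        if pivot is None:          # an all-zero row
            seen_zero_row = True
            continue
        if seen_zero_row:          # a nonzero row below a zero row
            return False
        if pivot <= last_pivot:    # pivot not strictly to the right
            return False
        last_pivot = pivot
    return True

print(is_row_echelon([[1, 0, 2, 5], [0, 3, 0, 0], [0, 0, 0, 4]]))  # True
```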
The notion of a triangular matrix is narrower and applies to square matrices only. It goes like this: a triangular matrix is a square matrix in which all elements below the main diagonal are zero. Strictly speaking, this is called an upper triangular matrix, but it is the kind we will use here. Obviously, an upper triangular matrix is also a row echelon matrix.
Example of upper triangular matrix:
1 0 2 5
0 3 1 3
0 0 4 2
0 0 0 3
By the way, the determinant of a triangular matrix is calculated simply by multiplying all its diagonal elements.
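As a quick illustration of that fact (a sketch of my own, using the example matrix above):

```python
import math

def triangular_det(m):
    # determinant of a triangular matrix = product of diagonal elements
    return math.prod(m[i][i] for i in range(len(m)))

print(triangular_det([[1, 0, 2, 5],
                      [0, 3, 1, 3],
                      [0, 0, 4, 2],
                      [0, 0, 0, 3]]))  # 1*3*4*3 = 36
```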
You may ask: what is so interesting about row echelon (and triangular) matrices that all other matrices should be reduced to them?
They have an amazing property: any rectangular matrix can be reduced to a row echelon matrix with elementary transformations.
What are the elementary transformations, you may ask?
Elementary matrix transformations are the following operations:
- Row switching (a row within the matrix can be switched with another row).
- Row multiplication (each element in a row can be multiplied by a non-zero constant).
- Row addition (a row can be replaced by the sum of that row and a multiple of another row).
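The three operations are simple enough to sketch in Python (hypothetical helpers of my own, for a matrix stored as a list of rows):

```python
def swap_rows(m, i, j):
    # row switching
    m[i], m[j] = m[j], m[i]

def scale_row(m, i, c):
    # row multiplication, allowed only by a non-zero constant
    assert c != 0
    m[i] = [c * v for v in m[i]]

def add_multiple(m, i, j, c):
    # row addition: replace row i by (row i + c * row j)
    m[i] = [a + c * b for a, b in zip(m[i], m[j])]

m = [[1, 2], [3, 4]]
swap_rows(m, 0, 1)
print(m)  # [[3, 4], [1, 2]]
```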
Elementary matrix transformations preserve the equivalence of matrices. And if you recall that systems of linear algebraic equations can be written in matrix form, it follows that elementary matrix transformations do not change the set of solutions of the system that the matrix represents.
By triangulating the matrix of the linear system AX = B to A'X = B', i.e. with the corresponding transformation of column B, you can perform so-called back substitution.
To be clear, let's take the upper triangular matrix above and rewrite the equation system in a more common form (I've made up column B):

x1 + 2*x3 + 5*x4 = b1
3*x2 + x3 + 3*x4 = b2
4*x3 + 2*x4 = b3
3*x4 = b4

It's clear that first we find x4 from the last equation, then substitute it into the previous equation to find x3, and so on, moving from the last equation to the first. That is what is called back substitution.
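Back substitution for an upper triangular system can be sketched in Python like this (a minimal illustration; the function name and the made-up column b are mine):

```python
def back_substitute(a, b):
    # solve A'x = b' for upper triangular A', moving
    # from the last equation to the first
    n = len(a)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

a = [[1, 0, 2, 5],
     [0, 3, 1, 3],
     [0, 0, 4, 2],
     [0, 0, 0, 3]]
print(back_substitute(a, [8, 7, 6, 3]))  # [1.0, 1.0, 1.0, 1.0]
```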
This row reduction algorithm is called the Gauss method, a classical method for solving systems of linear equations. It is also called Gaussian elimination, since it is a method of successive elimination of variables: using elementary transformations, the equation system is reduced to a row echelon (or triangular) form, from which all the variables can be found, starting from the last one.
Now some words about this method.
How can you zero out the variable x1 in the second equation?
By subtracting from the second equation the first one multiplied by the factor a_21/a_11.
Here is an example:
x1 + 2*x2 = 5
3*x1 + 4*x2 = 6
Subtract the first equation, multiplied by 3/1 = 3, from the second one:
(3*x1 + 4*x2) - 3*(x1 + 2*x2) = 6 - 3*5
-2*x2 = -9
Now there is no x1 in the second equation.
In a generalized form, the Gauss method can be represented as follows:

a_{ij}^{(k+1)} = a_{ij}^{(k)} - \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}} a_{kj}^{(k)}, \qquad k < i \le N, \; k \le j \le N

where N is the row dimension and a_{ij} is the element in row i, column j.
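A minimal Python sketch of this successive elimination (the function name is mine; it assumes the diagonal elements are nonzero):

```python
def gauss_triangulate(a):
    # reduce a square matrix to upper triangular form in place;
    # no pivoting, so each a[k][k] must be nonzero
    n = len(a)
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
    return a

print(gauss_triangulate([[2.0, 1.0],
                         [4.0, 4.0]]))  # [[2.0, 1.0], [0.0, 2.0]]
```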
It seems like a great method, but there is one thing: the division by a_{kk} occurring in the formula. First, if the diagonal element equals zero, the method won't work. Second, during the calculation the rounding error accumulates, and the further you go, the larger it gets, so the result won't be precise.
To reduce the error, modifications of the Gauss method are used. They are based on the fact that the larger the denominator, the lower the error. These modifications are the Gauss method with maximum selection in a column and the Gauss method with maximum selection in the entire matrix. As the names imply, before each elimination step the element with the maximum absolute value is searched for in the column (or in the entire matrix), and rows (or rows and columns) are permuted so that it takes the place of a_{kk}.
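The column-maximum modification (partial pivoting) can be sketched like this (again a hypothetical helper of my own, not the calculator's actual code):

```python
def gauss_pivot(a):
    # elimination with maximum selection in a column: before each step,
    # swap in the row whose k-th element has the largest absolute value
    n = len(a)
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
    return a

# a zero pivot that would break the plain method is handled by the swap
print(gauss_pivot([[0.0, 1.0],
                   [1.0, 0.0]]))  # [[1.0, 0.0], [0.0, 1.0]]
```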
But there is a more radical modification of the Gauss method: the Bareiss method.
How can you get rid of the division? By multiplying the i-th row by a_{kk} before subtracting. Then you have to subtract the k-th row multiplied by a_{ik}, without any division:

a_{kk}^{(k)} a_{ij}^{(k)} - a_{ik}^{(k)} a_{kj}^{(k)}
Seems good, but there is a problem: the element values grow rapidly during the calculation.
Bareiss proposed to divide the expression above by the pivot of the previous step, a_{k-1,k-1}^{(k-1)}, and showed that if the initial matrix elements are whole numbers then the resulting numbers will be whole as well. It is also assumed that for the zero step this divisor equals 1.
By the way, the fact that the Bareiss algorithm reduces the integer elements of the initial matrix to a triangular matrix with integer elements, i.e. without accumulating rounding error, is quite an important feature from the standpoint of machine arithmetic.
The Bareiss algorithm can be represented as:

a_{ij}^{(k+1)} = \frac{a_{kk}^{(k)} a_{ij}^{(k)} - a_{ik}^{(k)} a_{kj}^{(k)}}{a_{k-1,k-1}^{(k-1)}}

where for the first step (k = 0) the divisor is taken to be 1.
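A fraction-free Python sketch of this recurrence (integer-only arithmetic; the function name is mine, and it assumes nonzero pivots, since no row selection is done):

```python
def bareiss_triangulate(a):
    # Bareiss triangulation of an integer matrix, in place;
    # every division below is exact, so the result stays integer
    n = len(a)
    prev = 1  # pivot of the previous step; taken to be 1 before step 0
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return a

m = bareiss_triangulate([[1, 2, 3],
                         [4, 5, 6],
                         [7, 8, 10]])
print(m)        # [[1, 2, 3], [0, -3, -6], [0, 0, -3]]
print(m[2][2])  # -3, which equals the determinant of the input
```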
This algorithm can be extended, similarly to the Gauss method, with maximum selection in a column (or in the entire matrix) and rearrangement of the corresponding rows (or rows and columns).