Gaussian elimination

The calculator solves systems of linear equations using the row reduction (Gaussian elimination) algorithm and produces a step-by-step description of the solution.


A system of linear equations
\begin{cases}a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1\\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2\\ \dots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m\\ \end{cases}
can be solved using Gaussian elimination with the aid of the calculator.

In Gaussian elimination, the linear equation system is represented as an augmented matrix, i.e. the matrix containing the equation coefficients a_{ij} and the constant terms b_i. For the square case of n equations in n unknowns, the augmented matrix has dimensions n × (n + 1):
\left[\begin{array}{cccc|c} a_{11} & a_{12} & \dots & a_{1n} & b_1 \\ a_{21} & a_{22} & \dots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} & b_n \end{array}\right]
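
For instance, a small illustrative system (not taken from the calculator, just an example)
\begin{cases} x_1 + 2x_2 = 5 \\ 3x_1 + 4x_2 = 6 \end{cases}
is represented by the augmented matrix
\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & 4 & 6 \end{array}\right]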


Gaussian elimination

The method is named after Carl Friedrich Gauss, the brilliant German mathematician of the 19th century, although Gauss himself did not invent it. The row reduction method was known to ancient Chinese mathematicians; it was described in The Nine Chapters on the Mathematical Art, a Chinese mathematics book from around the 2nd century.

Forward elimination

The first step of Gaussian elimination is to obtain a row echelon form of the matrix. In this form, the lower-left part of the matrix contains only zeros, and all zero rows are below the non-zero rows:
\left[\begin{array}{cccc|c} a_{11} & a_{12} & \dots & a_{1n} & \beta_1 \\ 0 & a_{22} & \dots & a_{2n} & \beta_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & a_{nn} & \beta_n \end{array}\right]

The matrix is reduced to this form using elementary row operations: swapping two rows, multiplying a row by a constant, and adding a scalar multiple of one row to another row.
Our calculator obtains the echelon form by sequentially subtracting the upper row A_i, multiplied by a_{ji}, from the lower row A_j, multiplied by a_{ii}, where i is the index of the pivot (leading coefficient) row.
It is important that the leading coefficient is non-zero. If it becomes zero, the row is swapped with a lower row that has a non-zero coefficient in the same position.
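
The following Python sketch illustrates this forward elimination step on a list-of-lists augmented matrix. It is a minimal illustration of the row combination described above (A_j := a_{ii}·A_j − a_{ji}·A_i, with a row swap when the pivot is zero), not the calculator's actual source code; the function name forward_elimination is ours.

def forward_elimination(aug):
    """Reduce an n x (n+1) augmented matrix to row echelon form in place."""
    n = len(aug)
    for i in range(n):
        # Make sure the leading coefficient is non-zero: swap with a lower row if needed
        if aug[i][i] == 0:
            for j in range(i + 1, n):
                if aug[j][i] != 0:
                    aug[i], aug[j] = aug[j], aug[i]
                    break
        # Eliminate the entries below the pivot: A_j := a_ii * A_j - a_ji * A_i
        for j in range(i + 1, n):
            pivot = aug[i][i]       # a_ii
            factor = aug[j][i]      # a_ji, saved before the row changes
            for k in range(i, n + 1):
                aug[j][k] = pivot * aug[j][k] - factor * aug[i][k]
    return aug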

Back substitution

During this stage, the elementary row operations continue until the solution is found, finally putting the matrix into reduced row echelon form, from which the solution vector can be read off directly:
\left[\begin{array}{cccc|c} 1 & 0 & \dots & 0 & \beta_1 \\ 0 & 1 & \dots & 0 & \beta_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & 1 & \beta_n \end{array}\right]
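
A matching back substitution sketch (again only an illustration under the assumption of a unique solution, not the calculator's code) reads the unknowns off the echelon form from the bottom up; for a uniquely solvable system this is equivalent to continuing the row operations until the reduced row echelon form above is reached.

def back_substitution(aug):
    """Compute the solution vector from a row echelon augmented matrix (unique solution assumed)."""
    n = len(aug)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # Subtract the contributions of the already-known unknowns, then divide by the pivot
        s = aug[i][n] - sum(aug[i][k] * x[k] for k in range(i + 1, n))
        x[i] = s / aug[i][i]
    return x

For the small example above:

aug = [[1.0, 2.0, 5.0], [3.0, 4.0, 6.0]]
forward_elimination(aug)
print(back_substitution(aug))  # [-4.0, 4.5]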
