System of nonlinear equations solver

This online calculator attempts to find the numeric solution to a system of nonlinear equations using the method of coordinate descent

Created: 2023-11-17 18:14:32, Last updated: 2023-11-18 07:33:43

Equations are specified using formulas, which can involve mathematical operations, constants and mathematical functions, one formula per line. The syntax of the formulas is described below the calculator. Description of the coordinate descent optimization algorithm can also be found there.


Formula Syntax

Multiple variables (denoted as x1, x2, etc.), the number pi (pi), Euler's number (e), and the following mathematical operators are allowed in a formula:

+ - addition
- - subtraction
* - multiplication
/ - division
^ - exponentiation (raise to a power)

You can also use the following functions:

sqrt - square root
rootN - N-th root, e.g. root3(x) - cube root
exp - exponential function
lb - binary logarithm (base 2)
lg - decimal logarithm (base 10)
ln - natural logarithm (base e)
logB - logarithm to the base B, e.g. log7(x) - logarithm to the base 7
sin - sine
cos - cosine
tan - tangent
cot - cotangent
sec - secant
cosec - cosecant
arcsin - arcsine
arccos - arccosine
arctan - arctangent
arccotan - arccotangent
arcsec - arcsecant
arccosec - arccosecant
versin - versine
vercos - vercosine
haversin - haversine
exsec - exsecant
excsc - excosecant
sh - hyperbolic sine
ch - hyperbolic cosine
tanh - hyperbolic tangent
coth - hyperbolic cotangent
sech - hyperbolic secant
csch - hyperbolic cosecant
abs - absolute value (modulus)
sgn - signum
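As an illustration, the function names above can be mapped to Python's standard math module. This is only a sketch of how such a syntax might be interpreted; the mapping is an assumption for the example, not the calculator's actual implementation.

```python
import math

# Assumed mapping from the calculator's function names to Python
# equivalents (illustrative only, not the calculator's own code).
FUNCTIONS = {
    "sqrt": math.sqrt,
    "exp": math.exp,
    "lb": math.log2,        # binary logarithm (base 2)
    "lg": math.log10,       # decimal logarithm (base 10)
    "ln": math.log,         # natural logarithm (base e)
    "sin": math.sin,
    "cos": math.cos,
    "tan": math.tan,
    "abs": abs,
    "sgn": lambda x: (x > 0) - (x < 0),   # signum
}

def root(n, x):
    """N-th root: root(3, x) corresponds to root3(x) in the syntax above."""
    if x < 0 and n % 2 == 1:
        return -((-x) ** (1.0 / n))   # an odd root of a negative number is real
    return x ** (1.0 / n)

def logb(b, x):
    """Logarithm to base b: logb(7, x) corresponds to log7(x)."""
    return math.log(x, b)
```

For example, `FUNCTIONS["lg"](1000)` gives the decimal logarithm of 1000 and `root(3, -8)` gives the real cube root of -8.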

Solving Systems of Nonlinear Equations

The calculator above uses a numerical method to find a solution to a system of nonlinear equations. Here are a few definitions to make it clearer.

A nonlinear equation is an equation of the form \varphi \left(x\right)=0, where \varphi \left(x\right) is some nonlinear function, i.e., any function other than a linear one of the form y = kx + b.
Nonlinear equations come in algebraic and transcendental forms. The general form of an algebraic equation is a_0 + a_1x + a_2x^2 + \dots + a_nx^n = 0. Transcendental equations involve functions such as the exponential, sine, logarithm, etc.

Methods for solving such equations are divided into exact methods, where an analytical solution can be found, i.e., the solution can be written down as a formula (e.g., the quadratic formula), and iterative (numerical) methods. It is known¹ that there is no analytical solution for algebraic equations of degree higher than 4. In the general case, there is no analytical solution for transcendental equations either, so in most cases the solution can only be found by numerical methods. In a numerical method, the accuracy is predetermined, and the roots of the equation are searched for with that given accuracy.
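The calculator above uses coordinate descent (described below), but the idea of searching for a root with a predetermined accuracy can be shown with an even simpler numerical method, bisection. This is a hedged sketch for illustration, not the calculator's code; the transcendental equation x - cos(x) = 0 used here has no closed-form solution.

```python
import math

def bisect(phi, a, b, eps=1e-6):
    """Find a root of phi on [a, b], where phi(a) and phi(b) have
    opposite signs, to within the predetermined accuracy eps."""
    fa = phi(a)
    while b - a > eps:
        m = (a + b) / 2
        fm = phi(m)
        if fa * fm <= 0:      # the sign change is in the left half
            b = m
        else:                 # the sign change is in the right half
            a, fa = m, fm
    return (a + b) / 2

# x - cos(x) = 0 is transcendental: no formula for the root exists,
# but bisection finds it to the requested accuracy.
root = bisect(lambda x: x - math.cos(x), 0.0, 1.0)
```

Each iteration halves the interval containing the root, so the number of steps needed for a given accuracy is known in advance.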

A system of nonlinear equations is a system of the form
\varphi_1 \left(x_1, x_2, \dots, x_n\right)=0\\ \varphi_2 \left(x_1, x_2, \dots, x_n\right)=0\\ \dots\\ \varphi_n \left(x_1, x_2, \dots, x_n\right)=0
The solution of such a system is a vector X of dimension n.

Numerical methods for solving systems of nonlinear equations are, for example, the method of simple iterations (or Jacobi method) and Newton method. The method of simple iterations requires transformation of the original equations and calculation of the norm of the Jacobi matrix, while Newton's method (also iterative) requires calculation of the inverse Jacobi matrix. Thus, at each iteration step, one has to perform quite a lot of calculations. However, there is another family of methods for solving systems of nonlinear equations, the so-called optimization methods, one of which, the coordinate descent method, is used in the calculator above.

The idea behind optimization methods is to replace the initial root-finding problem with an optimization problem. A functional F is created from the initial system of nonlinear equations:
F(x_1, x_2, \dots, x_n)=\varphi_1^2 \left(x_1, x_2, \dots, x_n\right)+\varphi_2^2 \left(x_1, x_2, \dots, x_n\right)+\dots+\varphi_n^2 \left(x_1, x_2, \dots, x_n\right)
and the minimization problem is solved:
F(x_1, x_2, \dots, x_n) \to \min,
where the minimum value is 0 at an exact solution of the system.
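As a concrete illustration of building the functional F, consider the made-up system x1² + x2² - 1 = 0 (a circle) and x1 - x2 = 0 (a line); this example system is chosen for the sketch and is not taken from the calculator.

```python
import math

def phi1(x1, x2):
    return x1**2 + x2**2 - 1   # first equation: a circle

def phi2(x1, x2):
    return x1 - x2             # second equation: a line

def F(x1, x2):
    # Sum of squares: non-negative everywhere, and zero exactly
    # where both equations are satisfied simultaneously.
    return phi1(x1, x2)**2 + phi2(x1, x2)**2

# One exact solution of the system is (1/sqrt(2), 1/sqrt(2)),
# where the functional vanishes.
s = 1 / math.sqrt(2)
```

Squaring each equation makes F non-negative, so any root of the system is a global minimum of F with value 0.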

Note that, unlike solving the system of equations directly, the minimization problem always yields some result, even if the minimum found is not zero (the method got "stuck" in a local minimum). There are many optimization methods; here we use probably one of the simplest, the method of coordinate descent.

Algorithm of the method:

  1. The required accuracy ε is set.
  2. An initial approximation X₀ is chosen, e.g., the zero vector. If there are multiple roots, the choice of initial approximation determines which root will be found.
  3. A local minimum is sought along one of the coordinates.
  4. A new vector Xᵢ is formed:
    X_i = (x_{1_i}, x_{2_i}, ..., x_{n_i})
  5. The termination condition is checked. If
    \left|F(X_i) - F(X_{i-1})\right| < \epsilon
    then the solution with the required accuracy has been found; otherwise, another coordinate is chosen and the local minimum search is repeated (go to step 3).
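The steps above can be sketched in Python. This is a simplified illustration: the step sizes, the crude one-dimensional search, and the example system are all choices made for this sketch, not the calculator's actual implementation.

```python
def coordinate_descent(F, x0, eps=1e-10, step=0.5, max_sweeps=10000):
    """Minimize F by coordinate descent: repeatedly search for a local
    minimum along each coordinate axis (step 3) and stop when F changes
    by less than eps between sweeps (step 5)."""
    x = list(x0)               # step 2: initial approximation
    f_prev = F(x)
    for _ in range(max_sweeps):
        for i in range(len(x)):
            h = step
            while h > eps:     # crude 1-D line search along axis i
                for d in (h, -h):
                    y = x[:]
                    y[i] += d
                    if F(y) < F(x):
                        x = y                  # step 4: accept the new vector
                        break
                else:
                    h /= 2                     # no improvement: shrink the step
        f = F(x)
        if abs(f_prev - f) < eps:              # step 5: termination condition
            break
        f_prev = f
    return x

# Example system (made up for this sketch): x1^2 + x2^2 = 1, x1 = x2
def F(x):
    return (x[0]**2 + x[1]**2 - 1)**2 + (x[0] - x[1])**2

solution = coordinate_descent(F, [0.0, 0.0])   # zero vector as X0 (step 2)
```

Starting from the zero vector, the search converges to the root near (0.707, 0.707); starting from a different initial approximation, e.g. (-1, -1), would find the other root, as noted in step 2.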

  1. Abel's theorem on the insolubility of equations in radicals 
