
System of Linear Equations

A linear equation is an equation that may be put in the form:

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = b_1$$

where $x_1, x_2, x_3, \ldots, x_n$ are variables, the $a$'s are coefficients, and $b_1$ is a constant.

A system of linear equations (or linear system) is a collection of linear equations involving the same set of variables.

A general system of $n$ linear equations with $n$ unknowns can be written as:

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n &= b_3 \\
\vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
$$

It can also be written in matrix form:

$$A \vec{X} = \vec{B}$$

where $A$ is called the coefficient matrix, $\vec{X}$ is the matrix of unknowns, and $\vec{B}$ is the constant matrix.

$$
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_n \end{pmatrix}
$$
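As a concrete illustration, here is a minimal Python sketch of setting up and solving such a system directly with NumPy; the 3×3 system below is an arbitrary example chosen for illustration:

```python
import numpy as np

# An arbitrary 3x3 example system A X = B
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
B = np.array([6.0, 25.0, -11.0])

# Direct solution, for reference
X = np.linalg.solve(A, B)
print(X)
```

The iterative techniques below approximate this same $\vec{X}$ instead of solving for it directly.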

Iterative Techniques

The solution is guessed and then iteratively improved.

Jacobi Method

A slow method for solving systems of linear equations iteratively. An initial guess $\vec{X}^{(0)}$ is made for the solution vector. Subsequent approximations are then generated using the equation below, obtained by making $x_i$ the subject of the $i$-th equation:

$$x_i = \left( \sum_{j=1,\, j \neq i}^{n} -\frac{a_{ij}x_j}{a_{ii}} \right) + \frac{b_i}{a_{ii}}, \quad i = 1, 2, \ldots, n$$

For each $k \geq 1$, the components $x_i^{(k)}$ of $\vec{X}^{(k)}$ are generated from the components of the previous iterate $\vec{X}^{(k-1)}$, for each $i = 1, 2, \ldots, n$, using:

$$x_i^{(k)} = \frac{1}{a_{ii}} \left[ -\sum_{j=1,\, j \neq i}^{n} a_{ij}x_j^{(k-1)} + b_i \right]$$

The method requires every diagonal entry $a_{ii}$ to be non-zero, and it cannot be used when the coefficient matrix is singular.

Relative error is given by:

$$\text{Relative error} = \frac{\lVert \vec{X}^{(k)} - \vec{X}^{(k-1)} \rVert}{\lVert \vec{X}^{(k)} \rVert}$$

Here any convenient norm can be used; usually the infinity norm is used. For a vector it is $\lVert \vec{x} \rVert_\infty = \max_{1 \le i \le n} \lvert x_i \rvert$, and for a matrix it is the maximum absolute row sum:

$$\lVert A \rVert_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} \lvert a_{ij} \rvert$$
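Combining the update formula with this stopping criterion, the following is a minimal NumPy sketch of the Jacobi method. The function name `jacobi`, the tolerance `tol`, and the iteration cap `max_iter` are illustrative choices, and the sketch assumes every $a_{ii} \neq 0$:

```python
import numpy as np

def jacobi(A, B, x0=None, tol=1e-8, max_iter=100):
    """Jacobi iteration for A X = B; assumes every a_ii is non-zero."""
    n = len(B)
    x_old = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)          # diagonal entries a_ii
    R = A - np.diagflat(D)  # off-diagonal part of A
    for k in range(1, max_iter + 1):
        # x_i^(k) = (b_i - sum over j != i of a_ij x_j^(k-1)) / a_ii
        x_new = (B - R @ x_old) / D
        # Relative error measured in the infinity norm
        rel_err = np.linalg.norm(x_new - x_old, np.inf) / np.linalg.norm(x_new, np.inf)
        if rel_err < tol:
            return x_new, k
        x_old = x_new
    return x_old, max_iter
```

Applied to the strictly diagonally dominant example system above, `jacobi(A, B)` converges to the same vector as `np.linalg.solve(A, B)`.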

Gauss-Seidel Method

An improvement over the Jacobi method. The major difference is that, instead of using only values from the previous iteration, the newest available approximations of the $x_i$ are used.

When computing $x_i^{(k)}$, the components $x_1^{(k)}, \ldots, x_{i-1}^{(k)}$ have already been computed. They are used to compute $x_i^{(k)}$ as follows, for each $i = 1, 2, \ldots, n$:

$$x_i^{(k)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k-1)} \right]$$
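Under the same assumptions as the Jacobi sketch above (illustrative names, non-zero diagonal entries), a minimal Gauss-Seidel sketch follows; `x` is updated in place, so each $x_i^{(k)}$ immediately uses the components already computed in iteration $k$:

```python
import numpy as np

def gauss_seidel(A, B, x0=None, tol=1e-8, max_iter=100):
    """Gauss-Seidel iteration for A X = B; assumes every a_ii is non-zero."""
    n = len(B)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]              # newest values x_1..x_{i-1}
            s2 = A[i, i + 1:] @ x_old[i + 1:]  # previous iterate x_{i+1}..x_n
            x[i] = (B[i] - s1 - s2) / A[i, i]
        rel_err = np.linalg.norm(x - x_old, np.inf) / np.linalg.norm(x, np.inf)
        if rel_err < tol:
            return x, k
    return x, max_iter
```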

Convergence of iterative methods

Theorem 1

Suppose $\vec{X}^{(0)} \in \mathbb{R}^n$ and, for each $k \geq 1$,

$$\vec{X}^{(k)} = T \vec{X}^{(k-1)} + \vec{C}$$

The sequence $\{\vec{X}^{(k)}\}_{k=0}^\infty$ converges to the unique solution of $\vec{X} = T\vec{X} + \vec{C}$ if and only if $\rho(T) < 1$. Here $\rho$ denotes the spectral radius.

If $\lVert T \rVert < 1$ for any natural matrix norm and $\vec{C}$ is a given vector, then for any $\vec{X}^{(0)} \in \mathbb{R}^n$, the sequence converges to a vector $\vec{X} \in \mathbb{R}^n$ with $\vec{X} = T\vec{X} + \vec{C}$, and the following error bounds hold:

$$\lVert \vec{X} - \vec{X}^{(k)} \rVert \leq \lVert T \rVert^k \, \lVert \vec{X} - \vec{X}^{(0)} \rVert$$

$$\lVert \vec{X} - \vec{X}^{(k)} \rVert \leq \frac{\lVert T \rVert^k}{1 - \lVert T \rVert} \, \lVert \vec{X}^{(1)} - \vec{X}^{(0)} \rVert$$
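Theorem 1 can be checked numerically. For the Jacobi method, the iteration matrix under the standard splitting is $T = -D^{-1}(L + U)$, where $D$ is the diagonal part of $A$ and $L + U$ is its off-diagonal part. A minimal sketch, reusing the example matrix from the first snippet:

```python
import numpy as np

A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])

D = np.diagflat(np.diag(A))           # diagonal part of A
T = -np.linalg.inv(D) @ (A - D)       # Jacobi iteration matrix -D^{-1}(L + U)
rho = max(abs(np.linalg.eigvals(T)))  # spectral radius rho(T)
print(rho, rho < 1)                   # rho < 1  =>  convergence for any X^(0)
```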

Theorem 2

If $A$ is strictly diagonally dominant, then for any choice of $\vec{X}^{(0)}$, both the Jacobi and Gauss-Seidel methods give sequences $\{\vec{X}^{(k)}\}_{k=0}^\infty$ that converge to the unique solution of $A\vec{X} = \vec{B}$.
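Strict diagonal dominance means $\lvert a_{ii} \rvert > \sum_{j \neq i} \lvert a_{ij} \rvert$ for every row $i$, which is straightforward to test; a minimal sketch, assuming a square NumPy array `A`:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True iff |a_ii| > sum over j != i of |a_ij| for every row i."""
    M = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(M)
    off_diag_sums = M.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))
```

The example matrix used in the earlier snippets passes this test, so Theorem 2 guarantees convergence of both methods for it.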