Introduction to Numerical Methods

Round-Off Errors

The error that is produced when a calculator or computer is used to perform real number calculations. Real numbers are typically represented in computers using floating-point form.

Machine Numbers

Represented in the normalized decimal floating-point form.

$$\pm\, 0.d_1 d_2 \ldots d_k \times 10^n \quad \text{where} \quad 1 \le d_1 \le 9 \ \text{and} \ 0 \le d_i \le 9$$

Aka. $k$-digit decimal machine numbers. Any positive real number can be written in the above form.

Floating point form

Denoted as $fl(y)$. Obtained by terminating the mantissa of $y$ at $k$ decimal digits. The termination can be done in 2 ways:

  • chopping: chop off the digits $d_{k+1}d_{k+2}\ldots$ to produce the floating-point form
  • rounding: when $d_{k+1} \ge 5$, add $1$ to $d_k$ to obtain $fl(y)$ and then chop off all but the first $k$ digits.
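
As a concrete illustration, here is a minimal Python sketch of $k$-digit decimal chopping and rounding (not from the notes; the helper name `fl` and the `mode` flag are illustrative, and real machines work in binary rather than decimal, so boundary digits can occasionally differ):

```python
import math

def fl(y: float, k: int, mode: str = "round") -> float:
    """k-digit decimal floating-point form of y, by chopping or rounding."""
    if y == 0:
        return 0.0
    sign = -1.0 if y < 0 else 1.0
    y = abs(y)
    n = math.floor(math.log10(y)) + 1      # exponent n with 0.1 <= y / 10^n < 1
    mantissa = y / 10 ** n                 # 0.d1 d2 d3 ...
    scaled = mantissa * 10 ** k            # d1 d2 ... dk . d(k+1) ...
    if mode == "chop":
        digits = math.floor(scaled)        # drop d(k+1), d(k+2), ...
    else:
        digits = math.floor(scaled + 0.5)  # add 1 to dk when d(k+1) >= 5
    return sign * digits * 10.0 ** (n - k)

print(fl(math.pi, 5, "chop"))   # 3.1415
print(fl(math.pi, 5, "round"))  # 3.1416
```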

Measuring Errors

Suppose $p^*$ is an approximation of $p$.

Absolute Error

$$\lvert p - p^* \rvert$$

Relative Error

$$\left\lvert \frac{p - p^*}{p} \right\rvert$$

Successive relative error

When $p$ is unknown and $p^*$ is found through iterations, the relative error can be estimated as:

$$\left\lvert \frac{p_n(x) - p_{n-1}(x)}{p_n(x)} \right\rvert$$

Here $p_n(x)$ means the $n$-th approximation of $p$.
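
A small Python sketch of these three error measures (the function names are illustrative):

```python
def absolute_error(p: float, p_star: float) -> float:
    """|p - p*|"""
    return abs(p - p_star)

def relative_error(p: float, p_star: float) -> float:
    """|(p - p*) / p|, defined for p != 0."""
    return abs((p - p_star) / p)

def successive_relative_error(p_curr: float, p_prev: float) -> float:
    """|(p_n - p_(n-1)) / p_n|, usable as a stopping criterion."""
    return abs((p_curr - p_prev) / p_curr)

print(absolute_error(3.141592, 3.14))   # ~1.592e-3
print(relative_error(3.141592, 3.14))   # ~5.067e-4
```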

Finite-Digit Arithmetic

Machine arithmetic is done with finitely many digits and is therefore not exact. Suppose $\oplus, \ominus, \otimes, \oslash$ denote machine addition, subtraction, multiplication and division.

$$x \oplus y = fl(fl(x) + fl(y))$$
$$x \ominus y = fl(fl(x) - fl(y))$$
$$x \otimes y = fl(fl(x) \times fl(y))$$
$$x \oslash y = fl(fl(x) / fl(y))$$

Due to this, some accuracy is lost in each operation. Accuracy can often be improved by rearranging calculations, e.g. summing small terms before adding them to a much larger one.
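
Python's decimal module can simulate finite-digit arithmetic by limiting the context precision. A sketch (the 4-digit precision and the numbers are chosen purely for illustration) showing how rearranging a sum recovers accuracy:

```python
from decimal import Decimal, getcontext

getcontext().prec = 4          # simulate 4-digit machine arithmetic

big, small = Decimal("1000"), Decimal("0.4")

# Adding the small terms one at a time: each fl(1000 + 0.4) rounds back to 1000.
s = big
for _ in range(5):
    s = s + small
print(s)                       # 1000 -- every 0.4 was lost

# Rearranged: sum the small terms first (exactly 2.0), then add once.
t = sum([small] * 5, Decimal("0"))
print(big + t)                 # 1002 -- the exact answer
```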

Truncation Error

Occurs because an approximation is used in place of an exact mathematical procedure. For example, the error due to approximating $e^x$ by the first $n$ terms of its Taylor expansion.
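
For instance, a short Python sketch comparing the truncated Taylor series of $e^x$ against math.exp; the truncation error shrinks as more terms are kept:

```python
import math

def exp_taylor(x: float, n: int) -> float:
    """Approximate e^x by the first n + 1 terms of its Taylor series about 0."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k              # term is now x^k / k!
        total += term
    return total

for n in (2, 5, 10):
    approx = exp_taylor(1.0, n)
    print(n, approx, abs(math.exp(1.0) - approx))  # truncation error vs n
```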

Algorithm

An algorithm is a set of well-defined instructions to solve a problem.

Stable

If a small change in the input causes a small change in the output, the algorithm is stable.

Unstable

When an algorithm is not stable.

Conditionally Stable

When an algorithm is stable only within a certain input range.

Growth of Error

Suppose $E_0 > 0$ denotes an error introduced in a calculation, and $E_n$ represents the error after $n$ subsequent operations.

Linear growth

When $E_n \approx CnE_0$, where $C$ is a constant independent of $n$.

Exponential growth

When $E_n \approx C^n E_0$ for some $C > 1$.
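
A standard illustration (not from these notes): the recurrence $p_n = \frac{10}{3}p_{n-1} - p_{n-2}$ with $p_0 = 1$, $p_1 = \frac{1}{3}$ has exact solution $p_n = \left(\frac{1}{3}\right)^n$, but the round-off error made when storing $\frac{1}{3}$ is amplified roughly threefold at each step, so the computation exhibits exponential error growth and is unstable:

```python
p_prev, p = 1.0, 1.0 / 3.0       # storing 1/3 introduces a small round-off E_0
for n in range(2, 21):
    p_prev, p = p, (10.0 / 3.0) * p - p_prev

exact = (1.0 / 3.0) ** 20
print(p, exact, abs(p - exact))  # the error has grown roughly like 3^n * E_0
```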

Rate of convergence

A measure of how fast a sequence is converging.

Suppose $\{\beta_n\}$ converges to $0$ and $\{\alpha_n\}$ converges to a number $\alpha$.

If $\exists K > 0$ such that,

$$\lvert \alpha_n - \alpha \rvert \le K \lvert \beta_n \rvert \quad \text{for large } n$$

Then we say that $\{\alpha_n\}$ converges to $\alpha$ with rate of (or order of) convergence $O(\beta_n)$. It is written as $\alpha_n = \alpha + O(\beta_n)$.
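
For example (a standard textbook illustration), $\alpha_n = \frac{n+1}{n}$ converges to $\alpha = 1$; since $\lvert \alpha_n - 1 \rvert = \frac{1}{n}$, taking $\beta_n = \frac{1}{n}$ and $K = 1$ gives $\alpha_n = 1 + O\!\left(\frac{1}{n}\right)$.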

For limits

Suppose $\lim_{h \to 0} G(h) = 0$ and $\lim_{h \to 0} F(h) = L$.

If $\exists K > 0$ such that,

$$\lvert F(h) - L \rvert \le K \lvert G(h) \rvert \quad \text{for small } h$$

Then $F(h) = L + O(G(h))$.
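
For example (a standard illustration), Taylor's theorem gives $\cos h = 1 - \frac{1}{2}h^2 + \frac{1}{24}h^4\cos\xi$ for some $\xi$, so $\left\lvert \left(\cos h + \frac{1}{2}h^2\right) - 1 \right\rvert \le \frac{1}{24}h^4$, i.e. $\cos h + \frac{1}{2}h^2 = 1 + O(h^4)$.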

Numerical solution of non-linear equations

A non-linear function is a function whose graph is not a straight line.

In many situations, non-linear equations cannot be solved analytically. In such cases, their solutions must be approximated using iterative methods. The principle of these methods is to start from an arbitrary point, as close as possible to the sought solution, and to approach the solution gradually through successive iterations.

The 2 criteria to take into account when choosing a method for solving non-linear equations are:

  • Convergence of the method (conditions of convergence, speed of convergence, etc.).
  • The computational cost of the method.
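
As a concrete example of such an iterative method, here is a Python sketch of the bisection method (one of the simplest root-finding methods; the notes above don't single out a particular method), assuming $f$ is continuous and changes sign on $[a, b]$:

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Approximate a root of f in [a, b] by repeatedly halving the interval."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0 or (b - a) / 2.0 < tol:   # stop when the bracket is small
            return m
        if fa * fm < 0:                      # root lies in [a, m]
            b = m
        else:                                # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

# x^3 - x - 2 = 0 has a root near x = 1.5214
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
```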