Numerical analysis


Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). Numerical analysis finds applications in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.

Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid-20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.

The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289) gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.

Numerical analysis continues this long tradition: rather than exact symbolic answers, which can only be applied to real-world measurements by translation into digits, it gives approximate solutions within specified error bounds.

Areas of study


The field of numerical analysis includes many sub-disciplines. Some of the major ones are:

Interpolation: Observing that the temperature varies from 20 degrees Celsius at 1:00 to 14 degrees at 3:00, a linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30pm.

Extrapolation: if the gross domestic product of a country has been growing an average of 5% per year and was 100 billion last year, it might be extrapolated that it will be 105 billion this year.

Regression: In linear regression, given n points, a line is computed that passes as closely as possible to those n points.

Optimization: Suppose lemonade is sold at a lemonade stand, at $1.00 per glass, that 197 glasses of lemonade can be sold per day, and that for each increase of $0.01, one less glass of lemonade will be sold per day. If $1.485 could be charged, profit would be maximized, but due to the constraint of having to charge a whole-cent amount, charging $1.48 or $1.49 per glass will both yield the maximum income of $220.52 per day.

Differential equation: If 100 fans are set up to blow air from one end of the room to the other and then a feather is dropped into the wind, what happens? The feather will follow the air currents, which may be very complex. One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again. This is called the Euler method for solving an ordinary differential equation.
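
As a minimal sketch of the Euler method in Python (the differential equation, step size, and starting value below are illustrative assumptions, not taken from the example above):

    # Euler method for y' = f(t, y), advancing in straight-line steps of size h.
    import math

    def euler(f, t0, y0, h, steps):
        t, y = t0, y0
        for _ in range(steps):
            y += h * f(t, y)   # move at the current slope for one step
            t += h
        return y

    # Hypothetical example: y' = -2y with y(0) = 1; the exact solution is e^(-2t).
    approx = euler(lambda t, y: -2.0 * y, 0.0, 1.0, 0.01, 100)  # integrate to t = 1
    print(approx, math.exp(-2.0))  # approximation vs. exact value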

One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
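
For illustration, a minimal Python sketch of the Horner scheme (the polynomial and evaluation point are arbitrary choices):

    # Horner scheme: rewrite a_n x^n + ... + a_0 as (...(a_n x + a_(n-1)) x + ...) x + a_0,
    # so a degree-n polynomial needs only n multiplications and n additions.
    def horner(coeffs, x):
        # coeffs are ordered from the highest power down to the constant term
        result = 0.0
        for c in coeffs:
            result = result * x + c
        return result

    # Hypothetical example: evaluate 2x^3 - 6x^2 + 2x - 1 at x = 3 (result: 5).
    print(horner([2.0, -6.0, 2.0, -1.0], 3.0))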

Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
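
A minimal Python sketch of linear interpolation, reusing the temperature readings from the example above:

    # Linear interpolation between two known points (x0, y0) and (x1, y1).
    def lerp(x0, y0, x1, y1, x):
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

    # 20 degrees at 1:00 and 14 degrees at 3:00, as in the example above.
    print(lerp(1.0, 20.0, 3.0, 14.0, 2.0))  # 17.0 degrees at 2:00
    print(lerp(1.0, 20.0, 3.0, 14.0, 1.5))  # 18.5 degrees at 1:30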

Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.

Regression is also similar, but it takes into account that the data is imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be found. The least-squares method is one way to achieve this.
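
A minimal least-squares sketch using NumPy's lstsq (the data points are made up for illustration):

    # Least-squares fit of a line y = m*x + b through imprecise measurements.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])   # noisy, made-up data
    A = np.vstack([x, np.ones_like(x)]).T     # design matrix with columns [x, 1]
    (m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(m, b)  # slope and intercept minimizing the sum of squared errors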

Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2x + 5 = 3 is linear while 2x² + 5 = 3 is not.

Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or hermitian) and positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
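
As a sketch of one such iterative method, the Jacobi iteration in Python with NumPy (the system below is a made-up, strictly diagonally dominant example, which guarantees convergence):

    # Jacobi method for A x = b: solve each equation for its diagonal unknown,
    # using the previous iterate for all the other unknowns.
    import numpy as np

    def jacobi(A, b, iterations=50):
        D = np.diag(A)               # the diagonal entries of A
        R = A - np.diagflat(D)       # the off-diagonal remainder
        x = np.zeros_like(b)
        for _ in range(iterations):
            x = (b - R @ x) / D
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])   # made-up diagonally dominant system
    b = np.array([1.0, 3.0])
    print(jacobi(A, b), np.linalg.solve(A, b))  # iterate vs. direct solution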

Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
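
A minimal Python sketch of Newton's method (the function, derivative, starting point, and tolerance are illustrative choices); here it recovers the square root of 2, the same number the Babylonian tablet above approximates:

    # Newton's method: repeatedly replace x by x - f(x)/f'(x).
    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        x = x0
        for _ in range(max_iter):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Illustrative example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
    print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))  # 1.41421356...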

Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
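
As a sketch of the idea behind such compression, a low-rank approximation via NumPy's SVD (the matrix and retained rank are arbitrary stand-ins for image data):

    # Low-rank approximation via the singular value decomposition:
    # keeping the k largest singular values gives the best rank-k approximation.
    import numpy as np

    M = np.random.default_rng(0).random((8, 8))   # arbitrary stand-in for image data
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = 2
    M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    print(np.linalg.norm(M - M_k))                # size of the approximation error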

Optimization problems ask for the point at which a given function is maximized or minimized. Often, the point also has to satisfy some constraints.
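
As a brute-force check of the lemonade-stand example above (using exactly the model stated there), one can simply enumerate every whole-cent price:

    # Exhaustive search over whole-cent prices for the lemonade-stand model:
    # at $1.00 (100 cents), 197 glasses sell per day; each extra cent loses one sale.
    def income(cents):
        glasses = 197 - (cents - 100)
        return cents * glasses / 100.0   # daily income in dollars

    best = max(range(100, 297), key=income)
    print(best, income(best))  # 148 and 220.52 (149 cents ties at the same income)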

The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. For instance, linear programming deals with the case in which both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
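
A minimal linear-programming sketch with SciPy's linprog (the objective and constraints are made up); since linprog minimizes, a maximization problem is negated first:

    # Linear programming with scipy.optimize.linprog, which minimizes c @ x.
    from scipy.optimize import linprog

    # Made-up problem: maximize x + 2y subject to x + y <= 4 and x, y >= 0.
    res = linprog(c=[-1.0, -2.0],             # negated objective: max -> min
                  A_ub=[[1.0, 1.0]], b_ub=[4.0],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                    # optimum (0, 4) with value 8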

The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
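
A small worked sketch of this with SymPy (the objective and constraint are illustrative assumptions): the multiplier equations are solved together with the constraint itself.

    # Lagrange multipliers with SymPy: solve grad f = lam * grad g together with g = 0.
    import sympy as sp

    x, y, lam = sp.symbols('x y lam')
    f = x * y                 # illustrative objective
    g = x + y - 10            # illustrative constraint, g = 0
    equations = [sp.diff(f, x) - lam * sp.diff(g, x),
                 sp.diff(f, y) - lam * sp.diff(g, y),
                 g]
    print(sp.solve(equations, [x, y, lam], dict=True))  # x = y = 5, lam = 5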

Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
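
A minimal Python sketch of the composite Simpson's rule (the integrand, interval, and number of subintervals are illustrative):

    # Composite Simpson's rule: split [a, b] into n subintervals (n even) and
    # weight the sample points 1, 4, 2, 4, ..., 2, 4, 1.
    def simpson(f, a, b, n):
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            total += f(a + i * h) * (4 if i % 2 == 1 else 2)
        return total * h / 3.0

    # Illustrative example: the integral of x^2 over [0, 1] is exactly 1/3.
    print(simpson(lambda x: x * x, 0.0, 1.0, 10))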

Numerical analysis is also concerned with computing in an approximate way the solution of differential equations, both ordinary differential equations and partial differential equations.

Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
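
As a minimal finite-difference sketch (the particular boundary value problem below is an illustrative assumption), discretizing -u''(x) = 1 on [0, 1] with u(0) = u(1) = 0 reduces it to a small linear system:

    # Finite differences for -u''(x) = 1 on [0, 1] with u(0) = u(1) = 0:
    # replacing u''(x_i) by (u[i-1] - 2u[i] + u[i+1]) / h^2 gives a linear system.
    import numpy as np

    n = 9                                    # number of interior grid points
    h = 1.0 / (n + 1)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    f = np.ones(n)
    u = np.linalg.solve(A, f)                # approximate solution on the grid
    x = np.linspace(h, 1.0 - h, n)
    print(np.max(np.abs(u - x * (1 - x) / 2)))  # error against the exact solution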