Estimator


In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean.

There are point and interval estimators. Point estimators yield single-valued results. This is in contrast to an interval estimator, where the result would be a range of plausible values. "Single value" does not necessarily mean "single number", but includes vector valued or function valued estimators.
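To make the distinction concrete, the following sketch (assuming a Python environment with NumPy, and using an arbitrary simulated sample) computes a point estimate of a population mean and a simple normal-approximation interval estimate of the same quantity.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)  # observed sample (simulated here)

# Point estimator: the sample mean as a single-valued estimate of the population mean.
point_estimate = data.mean()

# Interval estimator: an approximate 95% confidence interval for the same mean,
# based on the normal approximation and the sample standard deviation.
se = data.std(ddof=1) / np.sqrt(len(data))
interval_estimate = (point_estimate - 1.96 * se, point_estimate + 1.96 * se)

print(point_estimate)     # a single value
print(interval_estimate)  # a range of plausible values
```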

Estimation theory is concerned with the properties of estimators; that is, with defining properties that can be used to compare different estimators (different rules for creating estimates) for the same quantity, based on the same data. Such properties can be used to determine the best rules to use under given circumstances. However, in robust statistics, statistical theory goes on to consider the balance between having good properties, if tightly defined assumptions hold, and having less good properties that hold under wider conditions.

Behavioral properties


A consistent sequence of estimators is a sequence of estimators that converge in probability to the quantity being estimated as the index (usually the sample size) grows without bound. In other words, increasing the sample size increases the probability of the estimator being close to the population parameter.

Mathematically, a sequence of estimators {t_n; n ≥ 0} is a consistent estimator for parameter θ if and only if, for any ε > 0, no matter how small, we have

\lim_{n\to\infty}\Pr\left\{\left|t_{n}-\theta\right|<\varepsilon\right\}=1.

The consistency defined above may be called weak consistency. The sequence is strongly consistent if it converges almost surely to the true value.
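A small simulation can illustrate weak consistency. The sketch below (an illustrative assumption: standard normal data, the sample mean as t_n, ε = 0.1, and arbitrary replication counts) estimates the probability that the sample mean falls within ε of the true mean for increasing sample sizes; that probability approaches 1.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, eps, reps = 0.0, 0.1, 500   # true mean, tolerance, replications

# For each sample size n, estimate Pr(|t_n - theta| < eps), where t_n is the sample mean.
# Weak consistency means this probability tends to 1 as n grows.
for n in (10, 100, 1_000, 10_000):
    means = rng.normal(loc=theta, scale=1.0, size=(reps, n)).mean(axis=1)
    prob_close = np.mean(np.abs(means - theta) < eps)
    print(n, prob_close)
```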

An estimator that converges to a multiple of a parameter can be made into a consistent estimator by multiplying it by a scale factor, namely the true value divided by the asymptotic value of the estimator. This occurs frequently in the estimation of scale parameters by measures of statistical dispersion.
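As an illustration of such a scale correction (an assumed example: normally distributed data, where the mean absolute deviation converges to σ√(2/π), a multiple of the scale parameter σ), multiplying by the known factor √(π/2) yields a consistent estimator of σ itself.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 3.0
x = rng.normal(loc=0.0, scale=sigma, size=100_000)

# For normal data the mean absolute deviation converges to sigma * sqrt(2/pi),
# i.e. to a known multiple of the scale parameter sigma.
mad = np.mean(np.abs(x - x.mean()))

# Multiplying by the corresponding scale factor sqrt(pi/2) gives a consistent
# estimator of sigma itself.
sigma_hat = mad * np.sqrt(np.pi / 2)
print(mad, sigma_hat, sigma)
```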

An asymptotically normal estimator is a consistent estimator whose distribution around the true parameter θ approaches a normal distribution with standard deviation shrinking in proportion to 1/\sqrt{n} as the sample size n grows. Using \xrightarrow{d} to denote convergence in distribution, t_n is asymptotically normal if

\sqrt{n}(t_{n}-\theta)\ \xrightarrow{d}\ N(0,V)

for some V.

In this formulation V/n can be called the asymptotic variance of the estimator. However, some authors also call V the asymptotic variance. Note that convergence will not necessarily have occurred for any finite "n", therefore this value is only an approximation to the true variance of the estimator, while in the limit the asymptotic variance V/n is simply zero. To be more specific, the distribution of the estimator t_n converges weakly to a Dirac delta function centered at θ.
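The sketch below (an illustrative assumption: the sample mean of exponential data with mean θ = 1, so that V equals the population variance 1, plus arbitrary simulation sizes) checks this numerically: √n(t_n − θ) has approximately zero mean and variance V, while the variance of t_n itself is close to V/n.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 1.0, 1_000, 10_000   # Exp(1): mean theta = 1, variance V = 1

# t_n is the sample mean of each replicated sample of size n.
t_n = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)

# sqrt(n) * (t_n - theta) should be approximately N(0, V) for large n.
z = np.sqrt(n) * (t_n - theta)
print(z.mean(), z.var())           # close to 0 and V = 1
print(t_n.var(), theta**2 / n)     # estimator variance close to V/n
```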

The central limit theorem implies asymptotic normality of the sample mean as an estimator of the true mean. More generally, maximum likelihood estimators are asymptotically normal under fairly weak regularity conditions; see the asymptotics section of the maximum likelihood article. However, not all estimators are asymptotically normal; the simplest examples are found when the true value of a parameter lies on the boundary of the allowable parameter region.

The efficiency of an estimator describes how well it estimates the quantity of interest in a "minimum error" manner. In reality, there is not an explicit best estimator; there can only be a better estimator. Whether an estimator is efficient depends on the choice of a particular loss function, and efficiency is reflected by two naturally desirable properties of estimators: being unbiased and having minimal mean squared error (MSE). These cannot in general both be satisfied simultaneously: a biased estimator may have a lower mean squared error than any unbiased estimator (see estimator bias). A function relates the mean squared error with the estimator bias:

\operatorname{MSE}(\hat{\theta})=B(\hat{\theta})^{2}+\operatorname{Var}(\hat{\theta})

The first term represents the mean squared error; the second term represents the square of the estimator bias; and the third term represents the variance of the estimator. The quality of an estimator can be judged by comparing the variances, the squares of the estimator biases, or the MSEs. The variance of a good estimator (good efficiency) would be smaller than the variance of a bad estimator (bad efficiency). The square of the bias of a good estimator would be smaller than that of a bad estimator. The MSE of a good estimator would be smaller than the MSE of a bad estimator. Suppose there are two estimators: \hat{\theta}_{1} is the good estimator and \hat{\theta}_{2} is the bad estimator. The above relationship can be expressed by the following formulas:

\operatorname{Var}(\hat{\theta}_{1})<\operatorname{Var}(\hat{\theta}_{2})

\left|B(\hat{\theta}_{1})\right|<\left|B(\hat{\theta}_{2})\right|

\operatorname{MSE}(\hat{\theta}_{1})<\operatorname{MSE}(\hat{\theta}_{2})
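A short simulation can make the comparison concrete. The sketch below (an illustrative assumption: estimating a normal mean, with the sample mean standing in for the good estimator \hat{\theta}_{1} and the sample median for the less efficient \hat{\theta}_{2}, using arbitrary simulation sizes) estimates the squared bias, variance, and MSE of each, and also shows empirically that MSE ≈ bias² + variance.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 0.0, 50, 50_000
samples = rng.normal(loc=theta, scale=1.0, size=(reps, n))

# Two estimators of the normal mean: theta_1 = sample mean (more efficient)
# and theta_2 = sample median (less efficient for normal data).
theta_1 = samples.mean(axis=1)
theta_2 = np.median(samples, axis=1)

for name, est in (("mean", theta_1), ("median", theta_2)):
    bias_sq = (est.mean() - theta) ** 2
    var = est.var()
    mse = np.mean((est - theta) ** 2)
    # Empirically MSE is approximately bias^2 + variance, and the sample mean
    # has the smaller variance and MSE of the two.
    print(name, bias_sq, var, mse)
```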