Consistent estimator


In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one.

In practice one constructs an estimator as a function of an available sample of size n, and then imagines being able to keep collecting data and expanding the sample ad infinitum. In this way one would obtain a sequence of estimates indexed by n, and consistency is a property of what occurs as the sample size “grows to infinity”. If the sequence of estimates can be mathematically shown to converge in probability to the true value θ0, it is called a consistent estimator; otherwise the estimator is said to be inconsistent.
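As an illustration of convergence in probability, the following Python sketch (assuming NumPy is available; the parameter values theta0, eps, and n_runs are arbitrary choices for this demonstration, not part of any standard) estimates the probability that the sample mean misses the true parameter by at least ε, for increasing sample sizes:

import numpy as np

# Sketch: empirically checking convergence in probability for the sample
# mean of i.i.d. N(theta0, 1) draws. theta0, eps, n_runs are illustrative.
rng = np.random.default_rng(0)
theta0, eps, n_runs = 5.0, 0.1, 1_000

for n in [10, 100, 1_000, 10_000]:
    samples = rng.normal(theta0, 1.0, size=(n_runs, n))
    estimates = samples.mean(axis=1)              # T_n for each simulated run
    miss_rate = np.mean(np.abs(estimates - theta0) >= eps)
    print(f"n={n:>6}: P(|T_n - theta0| >= {eps}) ~ {miss_rate:.4f}")

The printed miss rate shrinks toward zero as n grows, which is exactly the property the definition requires.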

Consistency as defined here is sometimes referred to as weak consistency. When we replace convergence in probability with almost sure convergence, the estimator is said to be strongly consistent. Consistency is related to bias; see bias versus consistency.

Examples


Suppose one has a sequence of statistically independent observations {X1, X2, ...} from a normal N(μ, σ2) distribution. To estimate μ based on the first n observations, one can use the sample mean: Tn = (X1 + ... + Xn)/n. This defines a sequence of estimators, indexed by the sample size n.
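The indexing by n can be made concrete with a running mean over a single growing sample. The sketch below (again assuming NumPy; the values of mu and sigma are arbitrary) computes T_n for the first n observations at several values of n:

import numpy as np

# Sketch: one growing sample from N(mu, sigma^2), with T_n computed as the
# running mean of the first n observations. Parameter values are arbitrary.
rng = np.random.default_rng(1)
mu, sigma = 2.0, 3.0
x = rng.normal(mu, sigma, size=100_000)

running_mean = np.cumsum(x) / np.arange(1, len(x) + 1)   # T_1, T_2, ..., T_N
for n in [10, 100, 1_000, 10_000, 100_000]:
    print(f"T_{n} = {running_mean[n - 1]:.4f}   (true mu = {mu})")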

From the properties of the normal distribution, we know the sampling distribution of this statistic: Tn is itself normally distributed, with mean μ and variance σ2/n. Equivalently, (Tn − μ)/(σ/√n) has a standard normal distribution:

Pr[|Tn − μ| ≥ ε] = Pr[√n·|Tn − μ|/σ ≥ √n·ε/σ] = 2(1 − Φ(√n·ε/σ)) → 0

as n tends to infinity, for every fixed ε > 0. Therefore, the sequence Tn of sample means is consistent for the population mean μ (recalling that Φ is the cumulative distribution function of the standard normal distribution).
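The closed-form tail probability 2(1 − Φ(√n·ε/σ)) can be evaluated directly. A minimal sketch, assuming SciPy is available and using arbitrary illustrative values for σ and ε:

import numpy as np
from scipy.stats import norm

# Evaluating 2*(1 - Phi(sqrt(n)*eps/sigma)) for a few sample sizes;
# sigma and eps are arbitrary illustrative values.
sigma, eps = 3.0, 0.1
for n in [10, 100, 1_000, 10_000, 100_000]:
    p = 2.0 * (1.0 - norm.cdf(np.sqrt(n) * eps / sigma))
    print(f"n={n:>6}: Pr(|T_n - mu| >= {eps}) = {p:.6f}")

The output decreases toward zero as n grows, matching the limit stated above.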