Bias of an estimator


See § Bias versus consistency for more.

All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators with small bias are frequently used. When a biased estimator is used, bounds on the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful.

Further, mean-unbiasedness is not preserved under non-linear transformations, though median-unbiasedness is (see § Effect of transformations); for example, the sample variance is a biased estimator for the population variance. These are all illustrated below.
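As a brief sketch of the sample-variance example (added here for illustration; it assumes independent observations X₁, …, Xₙ with variance σ² and uses the uncorrected estimator that divides by n):

$$S_n^2 = \frac{1}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2, \qquad \operatorname{E}\!\left[S_n^2\right] = \frac{n-1}{n}\,\sigma^2, \qquad \operatorname{Bias}\!\left(S_n^2,\sigma^2\right) = -\frac{\sigma^2}{n}.$$

Dividing by n − 1 instead of n (Bessel's correction) removes this bias.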

Definition


Suppose we have a statistic θ̂ which serves as an estimator of θ based on any observed data x. That is, we assume that our data follow some unknown distribution P(x | θ) (where θ is a fixed, unknown constant that is part of this distribution), and then we construct some estimator θ̂ that maps observed data to values that we hope are close to θ. The bias of θ̂ relative to θ is defined as

$$\operatorname{Bias}(\hat{\theta},\theta) = \operatorname{E}_{x\mid\theta}\!\bigl[\hat{\theta}\bigr] - \theta = \operatorname{E}_{x\mid\theta}\!\bigl[\hat{\theta} - \theta\bigr],$$

where E_{x | θ} denotes expected value over the distribution P(x | θ), i.e., averaging over all possible observations x. The second equation follows since θ is measurable with respect to the conditional distribution P(x | θ).
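As a short worked example of this definition (added for illustration; it assumes independent, identically distributed observations X₁, …, Xₙ with common mean μ), the sample mean is a mean-unbiased estimator of μ:

$$\operatorname{E}\!\left[\overline{X}\right] = \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^{n}X_i\right] = \frac{1}{n}\sum_{i=1}^{n}\operatorname{E}[X_i] = \mu, \qquad\text{so}\qquad \operatorname{Bias}\!\left(\overline{X},\mu\right) = 0.$$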

An estimator is said to be unbiased if its bias is equal to zero for all values of the parameter θ, or equivalently, if the expected value of the estimator matches that of the parameter.

In a simulation experiment concerning the properties of an estimator, the bias of the estimator may be assessed using the mean signed difference.
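A minimal simulation sketch of this idea (not from the original text; it assumes NumPy is available, and the normal population, sample size, and function names are illustrative choices) estimates the bias of the uncorrected and corrected sample variances as the mean signed difference between the estimates and the true value:

    import numpy as np

    rng = np.random.default_rng(0)

    def biased_var(x):
        # Uncorrected sample variance: divides by n.
        return x.var(ddof=0)

    def unbiased_var(x):
        # Bessel-corrected sample variance: divides by n - 1.
        return x.var(ddof=1)

    true_var = 4.0           # population variance of N(0, 2^2)
    n, trials = 10, 100_000  # small samples make the bias visible

    samples = rng.normal(loc=0.0, scale=2.0, size=(trials, n))

    # Mean signed difference between the estimates and the true value.
    bias_biased = np.mean([biased_var(s) for s in samples]) - true_var
    bias_unbiased = np.mean([unbiased_var(s) for s in samples]) - true_var

    print(f"estimated bias, divide by n:     {bias_biased:+.4f}")
    print(f"estimated bias, divide by n - 1: {bias_unbiased:+.4f}")

With these settings the first estimate should come out near −σ²/n = −0.4 and the second near zero, matching the analytical bias of the uncorrected sample variance sketched above.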