Topics: Statistics
(definition)
Let $\hat{\theta}_n$ be an estimator of the parameter $\theta$.
The mean squared error of $\hat{\theta}_n$ is the expected value of the square of the difference between $\hat{\theta}_n$ and $\theta$.
That is:

$$\operatorname{MSE}(\hat{\theta}_n) = E\left[\left(\hat{\theta}_n - \theta\right)^2\right]$$
(theorem)
We also have that:

$$\operatorname{MSE}(\hat{\theta}_n) = \operatorname{Var}(\hat{\theta}_n) + \left(B(\hat{\theta}_n)\right)^2$$

where $B(\hat{\theta}_n) = E[\hat{\theta}_n] - \theta$ is the bias of the estimator (see bias).
Proof
Notice that:
$$\begin{aligned}
\operatorname{MSE}(\hat{\theta}_n) &= E\left[\left(\hat{\theta}_n - \theta\right)^2\right] \\
&= E\left[\hat{\theta}_n^2 - 2\theta\hat{\theta}_n + \theta^2\right] \\
&= E\left[\hat{\theta}_n^2\right] - 2\theta E\left[\hat{\theta}_n\right] + \theta^2 \\
&= E\left[\hat{\theta}_n^2\right] - \left(E\left[\hat{\theta}_n\right]\right)^2 + \left(E\left[\hat{\theta}_n\right]\right)^2 - 2\theta E\left[\hat{\theta}_n\right] + \theta^2 \\
&= \operatorname{Var}(\hat{\theta}_n) + \left(E\left[\hat{\theta}_n\right] - \theta\right)^2 \\
&= \operatorname{Var}(\hat{\theta}_n) + \left(B(\hat{\theta}_n)\right)^2
\end{aligned}$$
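The bias–variance decomposition above can also be checked numerically. The sketch below (Python with NumPy; the shrinkage estimator, sample size, and all constants are illustrative choices, not from the text) simulates a deliberately biased estimator of a normal mean and verifies that the Monte Carlo MSE equals the variance plus the squared bias:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.0      # true parameter (population mean); illustrative value
n = 50           # sample size per replication
reps = 200_000   # Monte Carlo replications

# A deliberately biased estimator: shrink the sample mean toward zero.
# The factor 0.9 is an arbitrary choice to create nonzero bias.
samples = rng.normal(theta, 1.0, size=(reps, n))
theta_hat = 0.9 * samples.mean(axis=1)

mse = np.mean((theta_hat - theta) ** 2)       # estimate of E[(θ̂ − θ)²]
var = np.var(theta_hat)                       # estimate of Var(θ̂)
bias_sq = (np.mean(theta_hat) - theta) ** 2   # estimate of (B(θ̂))²

print(mse, var + bias_sq)
```

Note that the identity holds exactly for the empirical moments (not just approximately): expanding the empirical MSE around the sample mean of `theta_hat` makes the cross term vanish, mirroring the proof above.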