Law of total expectation


The proposition in probability theory known as the law of total expectation,[1] the law of iterated expectations[2] (LIE), Adam's law,[3] the tower rule,[4] and the smoothing theorem,[5] among other names, states that if X is a random variable whose expected value E(X) is defined, and Y is any random variable on the same probability space, then

E(X) = E(E(X | Y)),

i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X. The conditional expected value E(X | Y), with Y a random variable, is not a simple number; it is a random variable whose value depends on the value of Y. That is, the conditional expected value of X given the event Y = y is a number, and it is a function of y. If we write g(y) for the value of E(X | Y = y), then the random variable E(X | Y) is g(Y). One special case states that if {Aᵢ} is a finite or countable partition of the sample space, then

E(X) = ∑ᵢ E(X | Aᵢ) P(Aᵢ).
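
The identity E(E(X | Y)) = E(X) can also be checked numerically. Below is a small Monte Carlo sketch with a hypothetical setup (not from the article): Y is uniform on (0, 1) and, given Y = y, X is exponential with mean y + 1, so g(y) = E(X | Y = y) = y + 1 and both estimates should be near E(Y) + 1 = 1.5.

```python
import random

random.seed(0)

# Hypothetical example: Y ~ Uniform(0, 1); given Y = y, X ~ Exponential with
# mean y + 1, so g(y) = E(X | Y = y) = y + 1.
n = 200_000
ys = [random.random() for _ in range(n)]
xs = [random.expovariate(1.0 / (y + 1.0)) for y in ys]

mean_x = sum(xs) / n                   # estimates E(X)
mean_g = sum(y + 1.0 for y in ys) / n  # estimates E(E(X | Y)) = E(Y) + 1 = 1.5
print(mean_x, mean_g)  # both close to 1.5
```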

Example

Suppose that only two factories supply light bulbs to the market. Factory X's bulbs work for an average of 5000 hours, whereas factory Y's bulbs work for an average of 4000 hours. It is known that factory X supplies 60% of the total bulbs available. What is the expected length of time that a purchased bulb will work for? Applying the law of total expectation, we have:

E(L) = E(L | X) P(X) + E(L | Y) P(Y) = 5000(0.6) + 4000(0.4) = 4600

where

  • E(L) is the expected life of the bulb;
  • P(X) = 6/10 is the probability that the purchased bulb was manufactured by factory X;
  • P(Y) = 4/10 is the probability that the purchased bulb was manufactured by factory Y;
  • E(L | X) = 5000 is the expected lifetime of a bulb manufactured by X;
  • E(L | Y) = 4000 is the expected lifetime of a bulb manufactured by Y.

Thus each purchased light bulb has an expected lifetime of 4600 hours.
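The arithmetic can be reproduced directly; the probabilities and conditional means below are those stated in the example.

```python
# Law of total expectation over the two-cell partition
# {bulb made by factory X, bulb made by factory Y}.
p_x, p_y = 0.6, 0.4                # P(X), P(Y)
life_x, life_y = 5000.0, 4000.0    # E(L | X), E(L | Y)

expected_life = life_x * p_x + life_y * p_y
print(expected_life)  # 4600 hours (up to floating-point rounding)
```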

Informal proof

When a joint probability density function is well defined and the expectations are integrable, we write for the general case

E(X) = ∫ x Pr[X = x] dx
E(X | Y = y) = ∫ x Pr[X = x | Y = y] dx
E(E(X | Y)) = ∫ ( ∫ x Pr[X = x | Y = y] dx ) Pr[Y = y] dy
            = ∫∫ x Pr[X = x, Y = y] dx dy
            = ∫ x ( ∫ Pr[X = x, Y = y] dy ) dx
            = ∫ x Pr[X = x] dx
            = E(X).

A similar derivation works for discrete distributions using summation instead of integration. For the specific case of a partition, give each cell of the partition a unique label and let the random variable Y be the function of the sample space that assigns a cell's label to each point in that cell.
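
The discrete analogue of this interchange of integrals can be verified on a toy joint pmf (the numbers below are hypothetical, chosen only for illustration):

```python
# Discrete version of the informal proof: summing x Pr[X=x, Y=y] in either
# order gives the same answer, i.e. E(X) = E(E(X | Y)).
joint = {(1, 0): 0.1, (1, 1): 0.2, (2, 0): 0.3, (2, 1): 0.4}  # Pr[X=x, Y=y]

# Marginal Pr[Y = y].
p_y = {}
for (x, y), p in joint.items():
    p_y[y] = p_y.get(y, 0.0) + p

def e_x_given(y):
    """E(X | Y = y) = sum_x x Pr[X = x | Y = y]."""
    return sum(x * p / p_y[y] for (x, yy), p in joint.items() if yy == y)

lhs = sum(x * p for (x, _), p in joint.items())  # E(X), summing x first
rhs = sum(e_x_given(y) * p_y[y] for y in p_y)    # E(E(X | Y)), conditioning on y
print(lhs, rhs)  # both approximately 1.7
```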

Proof in the general case

Let (Ω, ℱ, P) be a probability space on which two sub-σ-algebras 𝒢₁ ⊆ 𝒢₂ ⊆ ℱ are defined. For a random variable X on such a space, the smoothing law states that if E[X] is defined, i.e. min(E[X⁺], E[X⁻]) < ∞, then

E[E[X | 𝒢₂] | 𝒢₁] = E[X | 𝒢₁]  (a.s.).

Proof. Since a conditional expectation is a Radon–Nikodym derivative, verifying the following two properties establishes the smoothing law:

  • E[E[X | 𝒢₂] | 𝒢₁] is 𝒢₁-measurable;
  • ∫_{G₁} E[E[X | 𝒢₂] | 𝒢₁] dP = ∫_{G₁} X dP, for all G₁ ∈ 𝒢₁.

The first of these properties holds by definition of the conditional expectation. To prove the second one,

min(∫_{G₁} X⁺ dP, ∫_{G₁} X⁻ dP) ≤ min(∫_Ω X⁺ dP, ∫_Ω X⁻ dP) = min(E[X⁺], E[X⁻]) < ∞,

so the integral ∫_{G₁} X dP is defined (not equal to ∞ − ∞). The second property thus holds, since G₁ ∈ 𝒢₁ ⊆ 𝒢₂ implies

∫_{G₁} E[E[X | 𝒢₂] | 𝒢₁] dP = ∫_{G₁} E[X | 𝒢₂] dP = ∫_{G₁} X dP.
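
On a finite sample space the smoothing law can be checked exhaustively. A minimal sketch, with a hypothetical uniform space Ω = {0, …, 7} and arbitrary X: 𝒢₁ is generated by a coarse partition, 𝒢₂ by a finer one, and conditional expectation simply averages over cells.

```python
# Finite check of E[E[X | G2] | G1] = E[X | G1] with nested partitions.
omega = list(range(8))
prob = {w: 1 / 8 for w in omega}          # uniform probability measure
x = {w: float(w * w) for w in omega}      # an arbitrary random variable X

coarse = [{0, 1, 2, 3}, {4, 5, 6, 7}]     # generates G1
fine = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]   # generates G2 (refines coarse)

def cond_exp(rv, partition):
    """E[rv | partition] as a function on Ω: constant on each cell."""
    out = {}
    for cell in partition:
        mass = sum(prob[w] for w in cell)
        avg = sum(rv[w] * prob[w] for w in cell) / mass
        for w in cell:
            out[w] = avg
    return out

inner = cond_exp(x, fine)        # E[X | G2]
lhs = cond_exp(inner, coarse)    # E[E[X | G2] | G1]
rhs = cond_exp(x, coarse)        # E[X | G1]
print(all(abs(lhs[w] - rhs[w]) < 1e-12 for w in omega))  # True
```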

Corollary. In the special case when 𝒢₁ = {∅, Ω} and 𝒢₂ = σ(Y), the smoothing law reduces to

E[E[X | Y]] = E[X].

Alternative proof for E[E[X | Y]] = E[X]. This is a simple consequence of the measure-theoretic definition of conditional expectation. By definition, E[X | Y] := E[X | σ(Y)] is a σ(Y)-measurable random variable that satisfies

∫_A E[X | Y] dP = ∫_A X dP,

for every measurable set A ∈ σ(Y). Taking A = Ω proves the claim.

References

  1. Weiss, Neil A. (2005). A Course in Probability. Boston: Addison–Wesley. pp. 380–383. ISBN 0-321-18954-X.
  2. "Law of Iterated Expectation | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2018-03-28.
  3. "Adam's and Eve's Laws". Adam and Eve's laws (Shiny app). 2024-09-15. Retrieved 2022-09-15.
  4. Rhee, Chang-han (Sep 20, 2011). "Probability and Statistics" (PDF).
  5. Wolpert, Robert (November 18, 2010). "Conditional Expectation" (PDF).