Law of Total Probability

In the robotics literature we encounter countless equations written in terms of probabilities. I'd like to point out a few identities that I have found particularly useful to remember. You will find all of them in any book on probability theory.

    \begin{equation*} \begin{aligned} p(x) &= \sum_y p(x, y) \\ &= \sum_y p(x|y)\cdot p(y) \end{aligned} \end{equation*}

or

    \begin{equation*} \begin{aligned} p(x) &= \int p(x, y) dy \\ &= \int p(x|y) \cdot p(y) dy \end{aligned} \end{equation*}

depending on whether the random variable x is discrete or continuous, respectively.
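The discrete form is easy to check numerically. Below is a minimal sketch, assuming a small made-up joint table p(x, y) over two binary variables (the numbers are purely illustrative): summing the joint over y and summing p(x|y)·p(y) over y give the same marginal.

```python
import numpy as np

# Hypothetical joint distribution p(x, y); rows indexed by x, columns by y.
# The values are illustrative and sum to 1.
p_xy = np.array([[0.10, 0.25],
                 [0.30, 0.35]])

# Law of total probability, first form: sum the joint over y.
p_x = p_xy.sum(axis=1)

# Second form: sum p(x|y) * p(y) over y.
p_y = p_xy.sum(axis=0)            # marginal p(y)
p_x_given_y = p_xy / p_y          # each column is p(x|y) for one value of y
p_x_alt = (p_x_given_y * p_y).sum(axis=1)

print(p_x)      # [0.35 0.65]
print(p_x_alt)  # [0.35 0.65]
```

Both routes reproduce the same p(x), as the law states.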

Marginalization

One can see that the law of total probability is a variant of marginalization, which states the following:

    \[ p(x) = \sum_y p(x, y) \]

or

    \[ p(x) = \int p(x, y) dy \]

for discrete or continuous random variable x, respectively.
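The continuous case can be approximated on a grid. As a sketch (assuming, for illustration, a joint density of two independent standard normals truncated to [-6, 6]), integrating the joint over y recovers the marginal density of x up to discretization error:

```python
import numpy as np

# Illustrative joint density: p(x, y) = N(x; 0, 1) * N(y; 0, 1) on a grid.
xs = np.linspace(-6.0, 6.0, 601)
ys = np.linspace(-6.0, 6.0, 601)
X, Y = np.meshgrid(xs, ys, indexing="ij")

def std_normal_pdf(t):
    return np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

p_xy_grid = std_normal_pdf(X) * std_normal_pdf(Y)

# Marginalize y out: approximate the integral by a Riemann sum over the y-axis.
dy = ys[1] - ys[0]
p_x_grid = p_xy_grid.sum(axis=1) * dy

# The result should closely match the standard normal density of x.
err = np.max(np.abs(p_x_grid - std_normal_pdf(xs)))
print(err)  # small grid-discretization error
```

Here the sum over grid cells plays the role of the integral over y in the equation above.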

Product Rule

One will also use and encounter the product rule a lot.

    \begin{equation*} \begin{aligned} p(x,y) &= p(x|y) \cdot p(y) \\ &= p(y|x) \cdot p(x) \end{aligned} \end{equation*}

Note that the joint distribution p(x, y) can be factored in two different ways, as shown above. And whenever you see a conditional distribution, remember that it can also be expressed the other way around using Bayes' rule, which follows directly from equating the two factorizations:

    \[ p(x|y) = \frac{p(y|x) \cdot p(x)}{p(y)} \]
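Both factorizations, and the Bayes'-rule inversion between them, can be verified on the same kind of small discrete joint table. A minimal sketch, with an illustrative joint distribution over two binary variables:

```python
import numpy as np

# Hypothetical joint distribution p(x, y); rows indexed by x, columns by y.
p_xy = np.array([[0.10, 0.25],
                 [0.30, 0.35]])

p_x = p_xy.sum(axis=1)             # marginal p(x)
p_y = p_xy.sum(axis=0)             # marginal p(y)

p_y_given_x = p_xy / p_x[:, None]  # p(y|x): each row sums to 1
p_x_given_y = p_xy / p_y[None, :]  # p(x|y): each column sums to 1

# Product rule: both factorizations reproduce the joint.
print(np.allclose(p_y_given_x * p_x[:, None], p_xy))  # True
print(np.allclose(p_x_given_y * p_y[None, :], p_xy))  # True

# Bayes' rule: p(x|y) = p(y|x) * p(x) / p(y)
bayes = p_y_given_x * p_x[:, None] / p_y[None, :]
print(np.allclose(bayes, p_x_given_y))                # True
```

Dividing one factorization by p(y) is exactly the Bayes'-rule step, which is why the last check passes.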
