This article is a continuation of the previous article: Introduction to Probability.

As we defined earlier, probability is the likelihood of an event, and it helps us predict randomness in our design. It is a function that maps an event to a real number.

How is this mapping done? Our goal is to assign probabilities to events in such a way that the assignment represents the likelihood of occurrence of each event. One way to approach this problem is the relative frequency approach: perform an experiment a large number of times, count how often the desired output occurs, and take the average. Of course, this approach requires that the experiment be repeatable. Here the desired output means that the same event occurs for which we are trying to calculate the probability. As an example, if we let the experiment be repeated [latex]n[/latex] times and [latex]n_A[/latex] is the number of times that event A occurs, then the probability of event A can be assigned using the following relation:

[latex]Pr(A) = \lim\limits_{n\to \infty} \frac{n_A}{n}[/latex]

This approach is also used in Monte Carlo simulations.
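To make this concrete, here is a minimal Python sketch of the relative frequency approach; the die experiment, the trial counts, and the function name are our own choices for illustration:

```python
import random

# A minimal sketch of the relative frequency approach: estimate
# Pr(rolling a 6) on a fair die as n_A / n for increasing n.
# (The true value is 1/6 ≈ 0.1667.)
def estimate_probability(trials):
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
    return hits / trials

for n in (100, 10_000, 1_000_000):
    print(n, estimate_probability(n))  # estimates approach 1/6 as n grows
```

As [latex]n[/latex] grows, the estimate settles near 1/6, which is exactly the limit in the relation above.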

Let us check whether this approach satisfies the axioms of probability. See the problem below:

Problem: Demonstrate that the relative frequency approach to assigning probabilities satisfies the three axioms of probability.

Solution: Rule-1: Probability is non-negative.

Let [latex]A[/latex] be an event in the sample space [latex]S[/latex].

[latex]P(A)=\lim_{n\rightarrow\infty}\frac{n_{A}}{n}[/latex]

Since [latex]n_{A}\ge0[/latex], and [latex]n>0[/latex], [latex]P(A)\ge0[/latex].

Rule-3: Probability of Sample Space is 1.

[latex]S[/latex] is the sample space for the experiment. Since [latex]S[/latex] must happen with each run of the experiment, [latex]n_{S}=n[/latex]. Hence

[latex]P(S)=\lim_{n\rightarrow\infty}\frac{n_{S}}{n}=1[/latex]

Rule-4: If [latex]A \cap B = \emptyset[/latex], then [latex]P(A\cup B) = P(A)+P(B)[/latex].

Suppose [latex]A\cap B=\emptyset[/latex]. For an experiment that is run [latex]n[/latex] times, assume the event [latex]A\cup B[/latex] occurs [latex]n'[/latex] times, while [latex]A[/latex] occurs [latex]n_{A}[/latex] times and [latex]B[/latex] occurs [latex]n_{B}[/latex] times. Since [latex]A[/latex] and [latex]B[/latex] cannot occur together, we have [latex]n'=n_{A}+n_{B}[/latex]. Hence

\begin{eqnarray*}
P(A\cup B)=\lim_{n\rightarrow\infty}\frac{n'}{n}=\lim_{n\rightarrow\infty}\frac{n_{A}+n_{B}}{n}=\lim_{n\rightarrow\infty}\frac{n_{A}}{n}+\lim_{n\rightarrow\infty}\frac{n_{B}}{n}=P(A)+P(B)\ .
\end{eqnarray*}
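As a quick sanity check of Rule-4, the sketch below uses a fair die with the disjoint events A = {1, 2} and B = {5} (an example of our own choosing) and compares the relative frequency of A ∪ B with the sum of the individual relative frequencies:

```python
import random

# Disjoint events on a fair die: A = {1, 2}, B = {5}, so A ∪ B = {1, 2, 5}.
# True values: P(A) = 1/3, P(B) = 1/6, P(A ∪ B) = 1/2.
n = 1_000_000
rolls = [random.randint(1, 6) for _ in range(n)]
n_A = sum(r in (1, 2) for r in rolls)
n_B = sum(r == 5 for r in rolls)
n_union = sum(r in (1, 2, 5) for r in rolls)
print(n_union / n, n_A / n + n_B / n)  # both ≈ 1/2
```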

Rule-5: This is an extension of Rule-4 (please revisit the previous article if you don’t remember Rule-5).

For an experiment that is run [latex]n[/latex] times, assume the event [latex]A_{i}[/latex] occurs [latex]n_{A_{i}}[/latex] times, [latex]i=1,2,\cdots[/latex]. Define the event [latex]C=A_{1}\cup A_{2}\cup\cdots\cup A_{i}\cup\cdots[/latex]. Since any two of these events are mutually exclusive, event [latex]C[/latex] occurs [latex]\sum_{i=1}^{\infty}{n_{A_{i}}}[/latex] times. Hence,

\begin{eqnarray*}
P(\bigcup_{i=1}^{\infty}A_{i})=\lim_{n\rightarrow\infty}\frac{\sum_{i=1}^{\infty}{n_{A_{i}}}}{n}=\sum_{i=1}^{\infty}{\lim_{n\rightarrow\infty}\frac{n_{A_{i}}}{n}}=\sum_{i=1}^{\infty}{P(A_{i})}\ .
\end{eqnarray*}

Building blocks of Probability:

Joint Probability: [latex]P(A \cap B)[/latex] is called the joint probability of A and B; it is also denoted P(A,B), and the notion is not limited to two events. If A and B are mutually exclusive, then their joint probability is 0. In terms of the relative frequency approach, we can define joint probability through an example.

Event A: { a person is a student }
Event B: { the person is below the age of 20 }

then [latex]P(A) = \lim\limits_{n\to \infty} \frac{n_A}{n}[/latex] and [latex]P(B) = \lim\limits_{n\to \infty} \frac{n_B}{n}[/latex], and the joint probability of these two events can be written as:

[latex]P(A,B) = \lim\limits_{n\to \infty} \frac{n_{A,B}}{n}[/latex]

where [latex]n_{A,B}[/latex] is the number of times a person is both a student and below the age of 20.
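A hypothetical simulation of this example (the population size, the 30% student share, and the uniform age range are invented purely for illustration):

```python
import random

# Hypothetical population: each person is a pair (is_student, age).
# The 30% student share and the 10-60 age range are invented numbers.
n = 100_000
people = [(random.random() < 0.3, random.randint(10, 60)) for _ in range(n)]

n_A = sum(is_student for is_student, _ in people)                  # event A
n_B = sum(age < 20 for _, age in people)                           # event B
n_AB = sum(is_student and age < 20 for is_student, age in people)  # A and B

print("P(A) ≈", n_A / n, "P(B) ≈", n_B / n, "P(A,B) ≈", n_AB / n)
```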

Conditional Probability: There are cases where the occurrence of an event depends on the occurrence of another event. As an example, rain depends on the cloud coverage of an area. If we call rain event A and cloud coverage event B, then the probability of A happening depends on event B. This type of probability is called conditional probability. It is denoted P(A|B), and we can represent it in terms of joint probability as well:

[latex]P(A|B) = \frac{n_{A,B}}{n_B}[/latex], since event B has already occurred (the clouds are already in the sky). Dividing both the numerator and the denominator by [latex]n[/latex] gives

[latex]P(A|B) = \frac{\frac{n_{A,B}}{n}}{\frac{n_B}{n}} = \frac{P(A,B)}{P(B)}[/latex]
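The last equality can be checked numerically. Below is a toy rain/clouds model of our own (the 0.4 cloud chance and the 0.25 and 0.02 rain chances are invented for illustration) that computes P(A|B) both ways:

```python
import random

# Toy model: clouds appear with chance 0.4; rain is more likely on
# cloudy days (0.25) than on clear days (0.02). All numbers invented.
n = 1_000_000
days = []
for _ in range(n):
    clouds = random.random() < 0.4
    rain = random.random() < (0.25 if clouds else 0.02)
    days.append((rain, clouds))

n_B = sum(clouds for _, clouds in days)
n_AB = sum(rain and clouds for rain, clouds in days)
print(n_AB / n_B)              # P(A|B) computed directly as n_{A,B} / n_B
print((n_AB / n) / (n_B / n))  # P(A,B) / P(B); same value (up to rounding)
```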

Independence: Events A and B are independent if P(A|B) = P(A), or equivalently P(B|A) = P(B); that is, knowing that B occurred tells us nothing about whether A occurs. For example, toss a coin twice and let A be heads on the first toss and B be heads on the second toss; the outcome of the first toss does not affect the second, so A and B are independent. Note, by contrast, that mutually exclusive events are not independent: if A is heads and B is tails on the same toss, then knowing B occurred guarantees that A did not, so P(A|B) = 0 ≠ P(A).

Another important point to note here is that the joint probability of two independent events can be obtained by multiplying their individual probabilities:

[latex]P(A,B)=P(A)P(B)[/latex]
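For instance, with two successive fair-coin tosses (A = heads on the first toss, B = heads on the second), a quick simulation of our own shows the estimate of P(A,B) matching P(A)P(B):

```python
import random

# Two independent fair-coin tosses: P(A) = P(B) = 1/2, so the product
# rule predicts P(A,B) = 1/4.
n = 1_000_000
tosses = [(random.random() < 0.5, random.random() < 0.5) for _ in range(n)]
p_A = sum(a for a, _ in tosses) / n
p_B = sum(b for _, b in tosses) / n
p_AB = sum(a and b for a, b in tosses) / n
print(p_AB, p_A * p_B)  # both ≈ 0.25
```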

These definitions do not violate the laws of probability. To verify this, see the proof below:

Problem: Demonstrate that the definition of conditional probability satisfies the three axioms of probability.

Solution: Rule-1: Probability is non-negative.

[latex]Pr(A\mid B)=\frac{Pr(A,B)}{Pr(B)}\ge0[/latex], since [latex]Pr(A,B)\ge0[/latex] and [latex]Pr(B)>0[/latex].

Rule-3: [latex]Pr(S\mid B)=\frac{Pr(S,B)}{Pr(B)}=\frac{Pr(B)}{Pr(B)}=1[/latex]

Rule-4: See the verification below:

\begin{eqnarray*}
Pr(A\cup B\mid C) & = & \frac{Pr((A\cup B)\cap C)}{Pr(C)}=\frac{Pr((A\cap C)\cup(B\cap C))}{Pr(C)}\\
& = & \frac{Pr(A\cap C)}{Pr(C)}+\frac{Pr(B\cap C)}{Pr(C)}-\frac{Pr(A\cap B\cap C)}{Pr(C)}\\
& = & Pr(A\mid C)+Pr(B\mid C)-Pr(A\cap B\mid C)
\end{eqnarray*}
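Note that when [latex]A\cap B=\emptyset[/latex] the last term vanishes, and we recover Rule-4. The identity can also be verified by direct counting; the sketch below uses a single fair die with events A = {1, 2}, B = {2, 3}, and C = {1, 2, 3, 4}, chosen by us for illustration:

```python
from fractions import Fraction

# Exact probabilities by counting equally likely outcomes of one fair die.
S = set(range(1, 7))
A, B, C = {1, 2}, {2, 3}, {1, 2, 3, 4}
P = lambda E: Fraction(len(E), len(S))

lhs = P((A | B) & C) / P(C)
rhs = P(A & C) / P(C) + P(B & C) / P(C) - P(A & B & C) / P(C)
print(lhs, rhs)  # both 3/4, confirming the identity for these events
```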

Another example of these concepts is given below:

Example: Two six-sided (balanced) dice are thrown. Find the probabilities of each of the following events:

  1. a 5 does not occur on either throw;
  2. the sum is 7;
  3. a 5 and a 3 occur in any order;
  4. the first throw is a 5 and the second throw is a 5 or a 4;
  5. both throws are 5;
  6. either throw is a 6.

Solution: See the worked answers below; a short enumeration check in Python follows the list.

  1. \begin{eqnarray*}
    Pr(5) & = & \frac{1}{6}\\
    Pr(\overline{5}) & = & \frac{5}{6}\\
    Pr(\overline{5},\overline{5}) & = & \frac{5}{6}\cdot\frac{5}{6}=\frac{25}{36}\\
    \end{eqnarray*}
  2. Pr([latex]sum=7[/latex]). The sum of 7 can occur in the following 6 possible ways.
    [latex]sum = \{(1;6), (2;5) , (3;4) , (4;3) , (5;2) , (6;1) \}[/latex]. And there are a total of 36 outcomes in the sample space.
    [latex]Pr(sum=7)=\frac{6}{36}=\frac{1}{6}[/latex]
  3. [latex]A= \{ (3;5) , (5;3) \}[/latex]
    [latex]Pr(A)=\frac{2}{36}=\frac{1}{18}[/latex]
  4. \begin{eqnarray*}
    Pr(A=5) & = & \frac{1}{6}\\
    Pr(B=5\ \mathrm{or}\ 4) & = & \frac{2}{6}\\
    Pr(A,B) & = & \frac{1}{6}\cdot\frac{2}{6}=\frac{1}{18}
    \end{eqnarray*}
  5. \begin{eqnarray*}
    Pr(5) & = & \frac{1}{6}\\
    Pr(5,5) & = & \frac{1}{6}\cdot\frac{1}{6}=\frac{1}{36}
    \end{eqnarray*}
  6. \begin{eqnarray*}
    Pr(A=6) & = & \frac{1}{6}\\
    Pr(B=6) & = & \frac{1}{6}\\
    Pr(A\cup B) & = & Pr(A)+Pr(B)-Pr(A\cap B)\\
    & = & \frac{1}{6}+\frac{1}{6}-\frac{1}{6}\cdot\frac{1}{6}\\
    & = & \frac{11}{36}
    \end{eqnarray*}
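All six answers can be verified by enumerating the 36 equally likely outcomes, as in the sketch below:

```python
from itertools import product
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of two fair dice and count
# the outcomes satisfying each event.
outcomes = list(product(range(1, 7), repeat=2))
P = lambda pred: Fraction(sum(pred(a, b) for a, b in outcomes), len(outcomes))

print(P(lambda a, b: a != 5 and b != 5))       # 1. 25/36
print(P(lambda a, b: a + b == 7))              # 2. 1/6
print(P(lambda a, b: {a, b} == {3, 5}))        # 3. 1/18
print(P(lambda a, b: a == 5 and b in (4, 5)))  # 4. 1/18
print(P(lambda a, b: a == 5 and b == 5))       # 5. 1/36
print(P(lambda a, b: a == 6 or b == 6))        # 6. 11/36
```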

If you have any suggestions or questions, please leave the comment below.