# 4.1 Introduction to Probability and Random Variables

Learning Objectives

By the end of this chapter, the student should be able to:

- Understand the terminology and basic rules of probability
- Handle general discrete random variables
- Recognize and apply the binomial distribution
- Understand general continuous random variables
- Recognize and apply special cases of continuous random variables (uniform, normal)
- Use the normal distribution to approximate the binomial

More than likely, you have used probability. In fact, you probably have an intuitive sense of probability. Probability deals with the chance of an event occurring. Whenever you weigh the odds of whether or not to do your homework or to study for an exam, you are using probability. In this chapter, you will learn how to solve probability problems using a systematic approach.

# Probability

Probability is a measure that is associated with how certain we are of outcomes of a particular experiment or activity. An experiment is a planned operation carried out under controlled conditions. If the result is not predetermined, then the experiment is said to be a probability experiment. Flipping one fair coin twice is an example of an experiment.

A result of an experiment is called an outcome. The sample space of an experiment is the set of all possible outcomes. Three ways to represent a sample space are listing the possible outcomes, creating a tree diagram, or creating a Venn diagram. The uppercase letter *S* is used to denote the sample space. For example, if you flip one fair coin, *S* = {*H*, *T*} where *H* (heads) and *T* (tails) are the outcomes.
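Listing a sample space can be done programmatically as well. A minimal Python sketch, assuming the two-flip experiment mentioned above, enumerates every ordered pair of heads and tails:

```python
from itertools import product

# Sample space for flipping one fair coin twice:
# every ordered pair of outcomes H (heads) and T (tails).
S = set(product("HT", repeat=2))

print(sorted(S))  # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]
```

Each tuple is one outcome; the set of all four tuples is the sample space *S*.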

An event is any combination of outcomes. Uppercase letters like *A* and *B* represent events. For example, if the experiment is to flip one fair coin twice, event *A* might be getting at most one head. The probability of an event *A* is written *P*(*A*).

The probability of any outcome is the long-term relative frequency of that outcome. Probabilities are between zero and one, inclusive (that is, zero, one, and all numbers between these values). *P*(*A*) = 0 means that event *A* can never happen. *P*(*A*) = 1 means that event *A* always happens. *P*(*A*) = 0.5 means that event *A* is equally likely to occur or not to occur. For example, if you flip one fair coin repeatedly (from 20 to 2,000 to 20,000 times), the relative frequency of heads approaches 0.5 (the probability of heads).

A probability model is a mathematical representation of a random process that lists all possible outcomes and assigns probabilities to each of them. This type of model is ultimately our goal as we move forward in our study of statistics.

## The Law of Large Numbers

An important characteristic of probability experiments known as the law of large numbers states that, as the number of repetitions of an experiment increases, the relative frequency obtained in the experiment tends to become closer and closer to the theoretical probability. Even though the outcomes do not happen according to any set pattern or order, overall, the long-term observed relative frequency will approach the theoretical probability. (The word “empirical” is often used instead of the word “observed.”)

If you toss a coin and record the result, what is the probability that the result is heads? If you flip a coin two times, does probability tell you that these flips will result in one head and one tail? You might toss a fair coin ten times and record nine heads. Probability does not describe the short-term results of an experiment; rather, it gives information about what can be expected in the long term. To demonstrate this, Karl Pearson once tossed a fair coin 24,000 times! He recorded the results of each toss, obtaining heads 12,012 times. In his experiment, Pearson illustrated the law of large numbers.
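The law of large numbers is easy to see in simulation. A short Python sketch (the sample sizes 20, 2,000, and 20,000 echo the ones mentioned earlier; the seed is an arbitrary choice for reproducibility):

```python
import random

random.seed(1)  # arbitrary seed so the demonstration is reproducible

def relative_frequency_of_heads(n):
    """Toss a simulated fair coin n times; return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

# As n grows, the relative frequency tends toward the
# theoretical probability of 0.5.
for n in (20, 2_000, 20_000):
    print(n, relative_frequency_of_heads(n))
```

The short runs can wander noticeably from 0.5, while the long runs cluster near it, just as the law of large numbers predicts.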

## The Axioms of Probability

Finding probabilities in more complicated situations starts with the three axioms of probability:

1. *P*(*S*) = 1
2. 0 ≤ *P*(*E*) ≤ 1 for every event *E*
3. For any two events *E*₁ and *E*₂ with *E*₁ ∩ *E*₂ = Ø, *P*(*E*₁ ∪ *E*₂) = *P*(*E*₁) + *P*(*E*₂)

The first two axioms should be fairly intuitive. Axiom 1 says that the probabilities of all outcomes in a sample space always add up to 1. Axiom 2 says that the probability of any event must be between 0 and 1. For now, the third axiom, called the disjoint addition rule, is less important; the ideas that follow rest on the first two axioms.
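The axioms can be checked directly against a concrete probability model. A minimal sketch, using a fair six-sided die as the assumed example:

```python
# A probability model for one roll of a fair six-sided die,
# stored as a mapping from each outcome to its probability.
model = {face: 1 / 6 for face in range(1, 7)}

def p(event):
    """P(event): sum the probabilities of the outcomes in the event."""
    return sum(model[outcome] for outcome in event)

# Axiom 1: the probabilities over the whole sample space sum to 1.
assert abs(sum(model.values()) - 1) < 1e-9

# Axiom 2: every probability lies between 0 and 1.
assert all(0 <= prob <= 1 for prob in model.values())

# Axiom 3 (disjoint addition rule): for disjoint events, probabilities add.
E1, E2 = {1, 2}, {5}  # E1 and E2 share no outcomes
assert abs(p(E1 | E2) - (p(E1) + p(E2))) < 1e-9
```

If any assertion failed, the dictionary would not be a valid probability model.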

# The Complement

Suppose we know the probability that an event occurs but want to know the probability that it does not occur, or vice versa. We can easily find this from the first two axioms of probability.

We call all of the outcomes in a sample space that are NOT included in an event the complement of the event. The complement of event *A* is usually denoted by *A′* (read “*A* prime”) or *A*^C.

There are several useful forms of the complement rule:

- *P*(*A*) + *P*(*A′*) = 1
- 1 − *P*(*A*) = *P*(*A′*)
- 1 − *P*(*A′*) = *P*(*A*)

Example

If *S* = {1, 2, 3, 4, 5, 6} and *A* = {1, 2, 3, 4}, then *A′* = {5, 6}:

*P*(*A*) = 4/6 = 2/3, *P*(*A′*) = 2/6 = 1/3, and *P*(*A*) + *P*(*A′*) = 4/6 + 2/6 = 1
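The same example can be worked with exact fractions in Python. A small sketch, assuming equally likely outcomes so that *P*(*E*) = |*E*| / |*S*|:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3, 4}
A_complement = S - A  # set difference gives the complement: {5, 6}

def P(event):
    """With equally likely outcomes, P(event) = |event| / |S|."""
    return Fraction(len(event), len(S))

print(P(A))                    # 2/3
print(P(A_complement))         # 1/3
print(P(A) + P(A_complement))  # 1
```

Using `Fraction` keeps the arithmetic exact, so the complement rule *P*(*A*) + *P*(*A′*) = 1 holds without rounding error.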

# Random Variables

Random variables (RVs) are probability models that quantify situations. A random variable describes the outcomes of a statistical experiment in words or as a function that assigns each element of a sample space a unique real number. Uppercase letters such as *X* or *Y* typically denote a random variable. Lowercase letters like *x* or *y* denote a specific value of that random variable. If *X* is a random variable, then *X* is described in words, and *x* is given as a number. For example, the probability that the random variable *X* equals 3 is written *P*(*X* = 3).
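A discrete random variable can be sketched in code as a mapping from each value *x* to *P*(*X* = *x*). The variable below is a hypothetical example (not from the text): *X* = the number of heads in two flips of a fair coin.

```python
# Hypothetical discrete random variable:
# X = number of heads in two flips of a fair coin.
# The dictionary maps each value x to P(X = x).
X_distribution = {0: 0.25, 1: 0.50, 2: 0.25}

# Looking up a value gives P(X = x), e.g. P(X = 1):
print(X_distribution[1])  # 0.5

# A valid distribution's probabilities sum to 1 (axiom 1).
assert abs(sum(X_distribution.values()) - 1) < 1e-9
```

The same dictionary pattern works for any discrete random variable with finitely many values.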

There are both continuous and discrete random variables depending on the type of data that situation would produce. We will begin with discrete random variables (DRVs) and revisit continuous random variables (CRVs) in the future.

**Figure References**

Figure 4.1: Ed Sweeney (2009). *2009 Leonid Meteor.* CC BY 2.0. https://flic.kr/p/7girE8

**Key Terms**

**Probability:** the study of randomness; a number between zero and one, inclusive, that gives the likelihood that a specific event will occur

**Probability experiment:** a random experiment where the result is not predetermined

**Outcome:** a particular result of an experiment

**Sample space:** the set of all possible outcomes of an experiment

**Event:** an outcome or subset of outcomes of an experiment in which you are interested

**Probability model:** a mathematical representation of a random process that lists all possible outcomes and assigns probabilities to each

**Law of large numbers:** as the number of trials in a probability experiment increases, the relative frequency of an event approaches the theoretical probability

**Complement:** the complement of an event consists of all outcomes in a sample space that are NOT in the event

| Amount of time spent on route (hours) | Frequency | Relative frequency | Cumulative relative frequency |
|---|---|---|---|
| 2 | 12 | 0.30 | 0.30 |
| 3 | 14 | 0.35 | 0.65 |
| 4 | 10 | 0.25 | 0.90 |
| 5 | 4 | 0.10 | 1.00 |
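The relative and cumulative relative frequency columns above can be recomputed from the raw counts. A minimal Python sketch using the data from the table:

```python
# Raw data from the table: hours spent on route and how often each occurred.
hours = [2, 3, 4, 5]
freq = [12, 14, 10, 4]
total = sum(freq)  # 40 observations in all

# Relative frequency = count / total;
# cumulative relative frequency = running sum of relative frequencies.
cumulative = 0.0
for h, f in zip(hours, freq):
    rel = f / total
    cumulative += rel
    print(h, f, round(rel, 2), round(cumulative, 2))
```

The printed rows reproduce the table's last two columns, and the final cumulative value is 1.00, as it must be for a complete distribution.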