Some Basic Ideas in Probability

The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible. --Pierre Simon Marquis de Laplace, A Philosophical Essay on Probabilities (page 6 of the translation published by Dover, 1951).
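In symbols, Laplace's definition amounts to the following (this is just a restatement of the passage above, not an additional claim):

\[
P(\text{event}) \;=\; \frac{\text{number of cases favorable to the event}}{\text{number of all cases possible}}.
\]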

The table below gives the basic terminology of probability theory in a non-rigorous, informal way. For each term, Example 1 refers to a coin-flipping experiment and Example 2 to a dart-throwing experiment.

Experiment
Meaning: A probabilistic experiment is simply the act of doing something and noting the outcome.
Example 1: Flip a coin three times and observe the pattern of heads and tails.
Example 2: Blindly throw a dart at a wall that is painted part red, part green, part blue, and part white.

Outcome
Meaning: The outcomes of an experiment are the results that can occur, described as specifically as possible.
Example 1: HHH, HHT, HTH, HTT, THH, THT, TTH, TTT
Example 2: Red, Green, Blue, White

Event
Meaning: A set of outcomes.
Example 1: "two tails" = {HTT, THT, TTH}
Example 2: "dart hits a primary color" = {Red, Green, Blue}

Sample Space
Meaning: The set of all possible outcomes.
Example 1: {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
Example 2: {Red, Green, Blue, White}

Probability of an Event
Meaning: The relative frequency of the event when the experiment is performed many, many times.
Example 1: Assuming the coin is fair, each of the eight outcomes will occur roughly the same number of times in many repeats of the experiment, so the probability of "two tails" is 3/8 (see the simulation sketch after the table).
Example 2: The probabilities of the different outcomes will depend on how much of the wall is covered by each color.
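
To make the relative-frequency idea concrete, here is a minimal simulation sketch (the function names and the choice of 100,000 repeats are illustrative, not part of the course materials). It repeats the coin experiment many times and computes the relative frequency of the event "two tails", which should come out close to 3/8 = 0.375.

```python
import random

def flip_three_coins():
    """Perform the experiment once: flip a fair coin three times, e.g. 'HTH'."""
    return "".join(random.choice("HT") for _ in range(3))

def relative_frequency_two_tails(num_repeats=100_000):
    """Estimate P("two tails") by its relative frequency over many repeats."""
    event = {"HTT", "THT", "TTH"}  # the outcomes with exactly two tails
    hits = sum(flip_three_coins() in event for _ in range(num_repeats))
    return hits / num_repeats

print(relative_frequency_two_tails())  # typically close to 3/8 = 0.375
```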

Earlier in this course, we introduced the concept of a variable (something you measure or observe on various occasions) and a distribution (a record of the frequencies, or relative frequencies, of the different possible values of the variable). This terminology works very nicely with the language of probability, as we now explain. Recall that whenever we have a variable, we also have a reference class. This is the class of all things, individuals, or events for which it makes sense to measure the variable. For example, if the variable is "marital status", it makes sense for people, but not for animals or things, since they don't get married and are never said to be "single". So the reference class is "all people". Every variable also has a set of possible values. Recall that the possible values must be non-overlapping. That is, we get one and only one of the possible values every time we measure the variable.
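
As a small illustration (a made-up sketch, not data from the course), suppose we record the marital status of ten people. The relative frequencies of the possible values form the distribution of the variable over this reference class.

```python
from collections import Counter

# Hypothetical reference class: ten people, each measured once on the
# variable "marital status". The possible values are non-overlapping:
# each person gets exactly one of them.
observations = ["single", "married", "single", "divorced", "married",
                "married", "single", "widowed", "married", "single"]

counts = Counter(observations)                  # frequency distribution
n = len(observations)
distribution = {value: count / n for value, count in counts.items()}
print(distribution)  # {'single': 0.4, 'married': 0.4, 'divorced': 0.1, 'widowed': 0.1}
```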

Now here is how this connects:

You can view the outcome of an experiment as a variable.

The collection of all the instances when the experiment is performed is the reference class of this variable. (Think of all possible times the experiment is performed: past, present, and future. This takes a little imagination, but it's by no means an unusual kind of fantasy. Imagine a genie with three fair coins that flips them forever.)

The sample space gives you the set of all possible values.

The probability of an outcome is the relative frequency of that outcome among all instances when the experiment is performed.

A record of the probability of each outcome is the (relative frequency) distribution of the variable, as the sketch below illustrates.
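
Putting these pieces together, here is a hedged sketch (the names and the repeat count are illustrative) that treats the outcome of the three-coin experiment as a variable and tabulates its relative-frequency distribution over many repeats, standing in for the genie who flips forever. For a fair coin, each of the eight outcomes should have relative frequency near 1/8 = 0.125.

```python
import random
from collections import Counter

def flip_three_coins():
    """One performance of the experiment; the observed value of the variable, e.g. 'THT'."""
    return "".join(random.choice("HT") for _ in range(3))

num_repeats = 100_000  # a stand-in for the genie's endless flipping
observations = [flip_three_coins() for _ in range(num_repeats)]

# The relative-frequency distribution of the outcome variable: one entry per
# point of the sample space, each close to 1/8 = 0.125 for a fair coin.
distribution = {outcome: count / num_repeats
                for outcome, count in sorted(Counter(observations).items())}
print(distribution)
```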