# Starting Probability (Section 9.1)

Section: 9.1
Date: Monday, February 21, 2011, 12:00 to Wednesday, February 23, 2011, 13:00
Attachments: section-9-1.pdf (58.03 KB), section-9-1.tex (5.11 KB)

Today, we started section 9.1. I passed out the attached PDF, which condenses the book's presentation a bit and helps organize some of the definitions.

----------------------------------

MATH1120–003 Spring 2011 University of Colorado

Section 9.1 Notes

Definitions (by order as found in text)

• experiment – An activity whose results can be observed and recorded. (e.g., flipping a coin two times)

• outcome – A possible result of an experiment. (e.g., “heads” in a single coin flip)

• sample space – A set, usually S, of all possible outcomes for an experiment. (e.g., S = {HH, HT, TH, TT} for flipping a coin two times)

• event – Any subset of a sample space. (e.g., {HH, HT}, the event that the first flip lands heads)

• empirical probability – When a probability is determined by viewing the results of experiments.

• theoretical probability – When a probability is determined mathematically, without experimentation.

• P(A) – The probability of outcome/event A. We always have 0 ≤ P(A) ≤ 1 for all events A.

• equally likely – When one outcome is just as likely as another (when their probabilities are the same).

• n(A) – The number of ways in which the event A occurs in the sample space S.

• uniform sample space – Each possible outcome in the sample space is equally likely.

• impossible event – An event that never occurs as an outcome in a sample space. ( P(∅) = 0 )

• certain event – An event that is certain to occur, no matter what. ( P(S) = 1 )

• ∅ – The set with no elements, known as the empty set.

• mutually exclusive – Events A and B are mutually exclusive if they have no elements in common (A ∩ B = ∅).

• complement – For an event A, the complement Ā is everything in the sample space that is not in A.
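The definitions above can be modeled directly with Python sets. This is a sketch of mine rather than part of the notes, using the two-coin-flip experiment as the running example; the names S, A, B, and C are illustrative choices, not from the text.

```python
# Sample space for flipping a coin twice
S = {"HH", "HT", "TH", "TT"}

# An event is any subset of S, e.g. "the first flip is heads"
A = {"HH", "HT"}

# Another event: "both flips match"
B = {"HH", "TT"}

# Complement of A: everything in S that is not in A
A_complement = S - A          # {"TH", "TT"}

# A and B are NOT mutually exclusive: they share the outcome "HH"
overlap = A & B               # {"HH"}

# Two events that ARE mutually exclusive have an empty intersection
C = {"TT"}
print(A & C == set())         # True: A and C are mutually exclusive
```

Set difference and intersection match the definitions of complement and mutual exclusivity exactly, which is why sets are a convenient mental model here.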

Theorems and Their Uses (by order as found in text)

9.1 – Law of Large Numbers (Bernoulli's Theorem)

If an experiment is repeated a large number of times, the experimental probability of a particular outcome approaches a fixed number as the number of repetitions increases.

Idea: Probability is best measured when the experiments in question are repeated and their results averaged.

Example: If you flip a coin three times, you may have heads land up each time and be led to think that your coin will land on heads more often. You'd probably be wrong in thinking such a thing, and doing the experiment over and over will show you this.
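A quick simulation (my sketch, not part of the notes) makes the Law of Large Numbers concrete: the experimental probability of heads settles near the theoretical value 1/2 as the number of flips grows. The seed is fixed only so the run is reproducible.

```python
import random

random.seed(2011)  # fixed seed for a reproducible run

def empirical_heads(n):
    """Flip a fair coin n times and return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

# Small samples can wander far from 0.5; large samples rarely do.
for n in (10, 100, 10_000):
    print(n, empirical_heads(n))
```

With only 10 flips you can easily see 7 or 8 heads; with 10,000 flips the fraction is reliably very close to 0.5.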

9.2 –

If A is any event and S is the sample space, then 0 ≤ P(A) ≤ 1.

Idea: Impossible events have a probability of 0, while certain events have a probability of 1. All other events will have probabilities between 0 and 1.

9.3 –

The probability of an event is equal to the sum of the probabilities of the disjoint outcomes making up the event.

Idea: If you roll a fair die and want to know the probability of rolling an odd number, you could roll a 1, a 3 or a 5. Consider the probabilities for each case individually and sum those: P(odd) = P(1) + P(3) + P(5) = 1/6 + 1/6 + 1/6 = 1/2.

9.4 –

If A and B are mutually exclusive, then P(A ∪ B) = P(A) + P(B).

Idea: See the previous example.
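The die example can be checked with exact fractions. This is my sketch of Theorems 9.3 and 9.4 for a fair six-sided die, not code from the text; the events chosen for 9.4 are illustrative.

```python
from fractions import Fraction

# Each face of a fair die has probability 1/6.
p_face = Fraction(1, 6)

# Theorem 9.3: P(odd) = P(1) + P(3) + P(5), since the
# single-face events are disjoint.
p_odd = p_face + p_face + p_face
print(p_odd)  # 1/2

# Theorem 9.4 with two mutually exclusive events:
# A = "roll a 1 or 2" and B = "roll a 6" share no outcomes,
# so P(A ∪ B) = P(A) + P(B).
p_A = Fraction(2, 6)
p_B = Fraction(1, 6)
print(p_A + p_B)  # 1/2
```

Using Fraction keeps the arithmetic exact, which mirrors how the probabilities are computed by hand.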

9.5 –

If A is an event and Ā is its complement, then P(A) + P(Ā) = 1.

Idea: The above equation can be rewritten in a number of ways, depending on specifically what one is looking for (e.g., P(Ā) = 1 − P(A)). The main idea is that an event either happens, or it does not happen. Considering those two as outcomes (A happens, and A doesn't happen) yields a certain event (one of them happens).
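The complement rule often turns a hard count into an easy one. Here is a sketch of mine (not the book's example): for three fair coin flips, "at least one head" is easiest to compute through its complement, "no heads at all."

```python
from fractions import Fraction

# P(no heads in three flips) = (1/2)^3, since only TTT has no heads.
p_no_heads = Fraction(1, 2) ** 3      # 1/8

# Theorem 9.5: P(at least one head) = 1 - P(no heads)
p_at_least_one = 1 - p_no_heads
print(p_at_least_one)  # 7/8
```

Counting "at least one head" directly would mean adding up seven of the eight outcomes; the complement needs only one.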

Calculating Probabilities

• Probability of an Event with Equally Likely Outcomes – For an experiment with sample space S with equally likely outcomes, the probability of an event A is given by P(A) = n(A)/n(S). The main idea here is to take the total number of ways that the event occurs, and divide it by the total number of outcomes.

• P(∅) = 0 – An impossible event never happens.

• P(S) = 1 – At least one outcome in S occurs.

• 0 ≤ P(A) ≤ 1 – For any event A, the probability of A is always between 0 and 1.

• Union of events – If A and B are events, then P(A ∪ B) = P(A) + P(B) − P(A ∩ B). (explained in class)

• Union of mutually exclusive events – If A and B are mutually exclusive, then P(A ∪ B) = P(A) + P(B).

• Complement – If A is any event, then P(Ā) = 1 − P(A).
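All of these rules can be verified by brute-force enumeration. The sketch below is mine, not from the notes: it rolls two fair dice, treats the 36 ordered pairs as a uniform sample space, and checks the n(A)/n(S) formula, the union formula, and the complement rule. The events A and B are illustrative choices.

```python
from fractions import Fraction
from itertools import product

# Uniform sample space: all 36 equally likely ordered pairs of dice.
S = list(product(range(1, 7), repeat=2))          # n(S) = 36

def P(event):
    """Probability of an event (a subset of S) with equally likely outcomes."""
    return Fraction(len(event), len(S))

A = [r for r in S if r[0] + r[1] == 7]            # "the dice sum to 7"
B = [r for r in S if r[0] == 1]                   # "the first die shows 1"
A_and_B = [r for r in S if r in A and r in B]     # intersection: only (1, 6)

print(P(A))                                       # 1/6
print(P(B))                                       # 1/6

# Union formula: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
union = [r for r in S if r in A or r in B]
assert P(union) == P(A) + P(B) - P(A_and_B)

# Complement: P(not A) = 1 - P(A)
not_A = [r for r in S if r not in A]
print(P(not_A))                                   # 5/6
```

Enumerating the whole sample space like this is a handy way to sanity-check a by-hand probability computation.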