
CME 106 - Introduction to Probability and Statistics for Engineers (https://stanford.edu/%7Eshervine/teaching/cme-106.html)

Cheatsheet

Table of Contents

Introduction to Probability and Combinatorics
Sample space, event, axioms of probability, permutation, combination

Conditional Probability
Bayes' rule, partition, independence

Random Variables
Cumulative distribution function, probability density function

Expectation and Moments of the Distribution
Expectation, moments of distribution functions, variance, standard deviation, transformation of random variables, characteristic function

Discrete Probability Distributions
Chebyshev's inequality, Binomial distribution, Poisson distribution

Continuous Probability Distributions
Uniform distribution, Gaussian distribution

Jointly Distributed Random Variables
Joint probability, marginal density, independence, covariance, correlation

Introduction to Probability and Combinatorics


Sample space ― The set of all possible outcomes of an experiment is known as the sample space of the experiment and is denoted by $S$.

Event ― Any subset $E$ of the sample space is known as an event. That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in $E$, we say that $E$ has occurred.

Axioms of probability ― For each event $E$, we denote by $P(E)$ the probability of event $E$ occurring.

Axiom 1 ― Every probability is between 0 and 1 inclusive, i.e.:

$$0 \leqslant P(E) \leqslant 1$$

Axiom 2 ― The probability that at least one of the elementary events in the entire sample space will occur is 1, i.e.:

$$P(S) = 1$$

Axiom 3 ― For any sequence of mutually exclusive events $E_1, \ldots, E_n$, we have:

$$P\left(\bigcup_{i=1}^{n} E_i\right) = \sum_{i=1}^{n} P(E_i)$$


Permutation ― A permutation is an arrangement of $r$ objects from a pool of $n$ objects, in a given order. The number of such arrangements is given by $P(n, r)$, defined as:

$$P(n, r) = \frac{n!}{(n-r)!}$$

Combination ― A combination is an arrangement of $r$ objects from a pool of $n$ objects, where the order does not matter. The number of such arrangements is given by $C(n, r)$, defined as:

$$C(n, r) = \frac{P(n, r)}{r!} = \frac{n!}{r!(n-r)!}$$

Remark: we note that for $0 \leqslant r \leqslant n$, we have $P(n, r) \geqslant C(n, r)$.
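As a quick sanity check, both counting formulas can be evaluated directly; the minimal sketch below uses Python's standard `math` module (`math.perm` and `math.comb`, available in Python 3.8+) and compares the built-ins against the factorial definitions above.

```python
import math

n, r = 10, 3

# Factorial definitions from the formulas above
perm = math.factorial(n) // math.factorial(n - r)                         # P(n, r) = n!/(n-r)!
comb = math.factorial(n) // (math.factorial(r) * math.factorial(n - r))  # C(n, r) = n!/(r!(n-r)!)

# Built-ins agree with the definitions
assert perm == math.perm(n, r) == 720
assert comb == math.comb(n, r) == 120

# Remark above: P(n, r) >= C(n, r) for 0 <= r <= n
assert perm >= comb
```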

Conditional Probability
Bayes' rule ― For events $A$ and $B$ such that $P(B) > 0$, we have:

$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$

Remark: we have $P(A \cap B) = P(A)P(B|A) = P(A|B)P(B)$.
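As an illustration, here is a minimal numerical sketch of Bayes' rule; the prevalence and test accuracies below are hypothetical numbers chosen only for the example.

```python
# Hypothetical diagnostic-test example of Bayes' rule (numbers are made up).
p_A = 0.01             # P(A): prior probability of having the condition
p_B_given_A = 0.95     # P(B|A): probability the test is positive given the condition
p_B_given_notA = 0.05  # P(B|not A): false-positive rate

# Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' rule: P(A|B) = P(B|A)P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(f"P(A|B) = {p_A_given_B:.3f}")  # ~0.161
```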

Partition ― Let $\{A_i, i \in [\![1, n]\!]\}$ be such that for all $i$, $A_i \neq \emptyset$. We say that $\{A_i\}$ is a partition if we have:

$$\forall i \neq j, \ A_i \cap A_j = \emptyset \quad \text{and} \quad \bigcup_{i=1}^{n} A_i = S$$

Remark: for any event $B$ in the sample space, we have $P(B) = \displaystyle\sum_{i=1}^{n} P(B|A_i)P(A_i)$.

Extended form of Bayes' rule ― Let $\{A_i, i \in [\![1, n]\!]\}$ be a partition of the sample space. We have:

$$P(A_k|B) = \frac{P(B|A_k)P(A_k)}{\displaystyle\sum_{i=1}^{n} P(B|A_i)P(A_i)}$$
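A small sketch of the extended form, written as a generic Python function; the three-machine priors and defect rates are hypothetical values for illustration.

```python
def posterior(priors, likelihoods, k):
    """Extended Bayes' rule: P(A_k|B) from priors P(A_i) and likelihoods P(B|A_i)."""
    total = sum(p * l for p, l in zip(priors, likelihoods))  # denominator: P(B)
    return likelihoods[k] * priors[k] / total

# Hypothetical example: three machines produce 50%, 30%, 20% of items (a partition),
# with defect rates 1%, 2%, 3%. Probability a defective item came from machine 2:
print(posterior([0.5, 0.3, 0.2], [0.01, 0.02, 0.03], k=2))  # ~0.353
```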

Independence ― Two events $A$ and $B$ are independent if and only if we have:

$$P(A \cap B) = P(A)P(B)$$

Random Variables

Definitions
Random variable ― A random variable, often noted $X$, is a function that maps every element of the sample space to the real line.

Cumulative distribution function (CDF) ― The cumulative distribution function $F$, which is monotonically non-decreasing and such that $\lim_{x \to -\infty} F(x) = 0$ and $\lim_{x \to +\infty} F(x) = 1$, is defined as:

$$F(x) = P(X \leqslant x)$$


Remark: we have $P(a < X \leqslant b) = F(b) - F(a)$.

Probability density function (PDF) ― The probability density function $f$ describes the likelihood that $X$ takes on values in the neighborhood of a given point: in the continuous case, $f(x)\,dx$ is the probability that $X$ falls between $x$ and $x + dx$.

Relationships involving the PDF and CDF


Discrete case ― Here, $X$ takes discrete values, such as outcomes of coin flips. By noting $f$ and $F$ the PDF and CDF respectively, we have the following relations:

$$F(x) = \sum_{x_i \leqslant x} P(X = x_i) \quad \text{and} \quad f(x_j) = P(X = x_j)$$

On top of that, the PDF is such that:

$$0 \leqslant f(x_j) \leqslant 1 \quad \text{and} \quad \sum_{j} f(x_j) = 1$$

Continuous case ― Here, $X$ takes continuous values, such as the temperature in the room. By noting $f$ and $F$ the PDF and CDF respectively, we have the following relations:

$$F(x) = \int_{-\infty}^{x} f(y)\,dy \quad \text{and} \quad f(x) = \frac{dF}{dx}$$

On top of that, the PDF is such that:

$$f(x) \geqslant 0 \quad \text{and} \quad \int_{-\infty}^{+\infty} f(x)\,dx = 1$$
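These relations can be checked numerically; a minimal sketch (assuming SciPy is available) integrates a PDF and compares the result against the library's CDF.

```python
from scipy import stats
from scipy.integrate import quad

dist = stats.norm(loc=0, scale=1)  # any continuous distribution works here

# F(x) should equal the integral of f from -infinity to x
x = 0.7
cdf_from_pdf, _ = quad(dist.pdf, -float("inf"), x)
assert abs(cdf_from_pdf - dist.cdf(x)) < 1e-8

# The PDF integrates to 1 over the real line
total, _ = quad(dist.pdf, -float("inf"), float("inf"))
assert abs(total - 1.0) < 1e-8
```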

Expectation and Moments of the Distribution

In the following sections, we keep the same notations as before, and the formulas are explicitly detailed for the discrete (D) and continuous (C) cases.

Expected value ― The expected value of a random variable, also known as the mean value or the first moment, is often noted $E[X]$ or $\mu$, and is the value that we would obtain by averaging the results of the experiment infinitely many times. It is computed as follows:

$$\text{(D)} \quad E[X] = \sum_{i=1}^{n} x_i f(x_i) \quad \text{and} \quad \text{(C)} \quad E[X] = \int_{-\infty}^{+\infty} x f(x)\,dx$$

Generalization of the expected value ― The expected value of a function of a random variable $g(X)$ is computed as follows:

$$\text{(D)} \quad E[g(X)] = \sum_{i=1}^{n} g(x_i) f(x_i) \quad \text{and} \quad \text{(C)} \quad E[g(X)] = \int_{-\infty}^{+\infty} g(x) f(x)\,dx$$

$k^{th}$ moment ― The $k^{th}$ moment, noted $E[X^k]$, is the value of $X^k$ that we expect to observe on average over infinitely many trials. It is computed as follows:

$$\text{(D)} \quad E[X^k] = \sum_{i=1}^{n} x_i^k f(x_i) \quad \text{and} \quad \text{(C)} \quad E[X^k] = \int_{-\infty}^{+\infty} x^k f(x)\,dx$$

Remark: the $k^{th}$ moment is a particular case of the previous definition with $g : X \mapsto X^k$.


Variance ― The variance of a random variable, often noted $\textrm{Var}(X)$ or $\sigma^2$, is a measure of the spread of its distribution function. It is determined as follows:

$$\textrm{Var}(X) = E[(X - E[X])^2] = E[X^2] - E[X]^2$$

Standard deviation ― The standard deviation of a random variable, often noted $\sigma$, is a measure of the spread of its distribution function which is compatible with the units of the actual random variable. It is determined as follows:

$$\sigma = \sqrt{\textrm{Var}(X)}$$
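As a quick illustration, the sample analogues of these quantities can be computed with NumPy (assuming it is installed); for large samples they approach the theoretical $\mu$, $\sigma^2$ and $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # exponential with mean 2, variance 4

print(x.mean())  # sample mean, close to E[X] = 2
print(x.var())   # sample variance, close to Var(X) = 4
print(x.std())   # sample standard deviation, close to sigma = 2
```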

Characteristic function ― A characteristic function $\psi(\omega)$ is derived from a probability density function $f(x)$ and is defined as:

$$\text{(D)} \quad \psi(\omega) = \sum_{i=1}^{n} f(x_i) e^{i\omega x_i} \quad \text{and} \quad \text{(C)} \quad \psi(\omega) = \int_{-\infty}^{+\infty} f(x) e^{i\omega x}\,dx$$

Remark: we have $e^{i\omega x} = \cos(\omega x) + i \sin(\omega x)$.

Revisiting the $k^{th}$ moment ― The $k^{th}$ moment can also be computed with the characteristic function as follows:

$$E[X^k] = \frac{1}{i^k} \left[\frac{\partial^k \psi}{\partial \omega^k}\right]_{\omega = 0}$$
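This identity can be verified symbolically; the sketch below (assuming SymPy is available) differentiates the Poisson characteristic function given later in this cheatsheet and recovers $E[X] = \mu$.

```python
import sympy as sp

w, mu = sp.symbols("omega mu")
psi = sp.exp(mu * (sp.exp(sp.I * w) - 1))  # characteristic function of Po(mu)

# First moment: E[X] = (1/i) * d(psi)/d(omega) evaluated at omega = 0
first_moment = sp.simplify(sp.diff(psi, w).subs(w, 0) / sp.I)
print(first_moment)  # mu
```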

Transformation of random variables ― Let the variables $X$ and $Y$ be linked by some function. By noting $f_X$ and $f_Y$ the distribution functions of $X$ and $Y$ respectively, we have:

$$f_Y(y) = f_X(x) \left|\frac{dx}{dy}\right|$$
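As an illustrative sketch (assuming NumPy and SciPy), take $Y = e^X$ with $X \sim N(0, 1)$: the formula gives $f_Y(y) = f_X(\ln y) \cdot \frac{1}{y}$, which matches SciPy's log-normal density.

```python
import numpy as np
from scipy import stats

y = np.linspace(0.1, 5.0, 50)

# Transformation formula with Y = exp(X), so x = ln(y) and |dx/dy| = 1/y
f_Y = stats.norm.pdf(np.log(y)) / y

# SciPy's log-normal density (s=1 corresponds to X ~ N(0, 1))
assert np.allclose(f_Y, stats.lognorm.pdf(y, s=1))
```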

Leibniz integral rule ― Let $g$ be a function of $x$ and potentially $c$, and $a, b$ boundaries that may depend on $c$. We have:

$$\frac{\partial}{\partial c}\left(\int_{a}^{b} g(x)\,dx\right) = \frac{\partial b}{\partial c} \cdot g(b) - \frac{\partial a}{\partial c} \cdot g(a) + \int_{a}^{b} \frac{\partial g}{\partial c}(x)\,dx$$

Discrete Probability Distributions


Chebyshev's inequality ― Let $X$ be a random variable with expected value $\mu$ and standard deviation $\sigma$. For $k > 0$, we have the following inequality:

$$P(|X - \mu| \geqslant k\sigma) \leqslant \frac{1}{k^2}$$
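A minimal empirical sketch (assuming NumPy): for any distribution, the observed tail frequency stays below the Chebyshev bound $1/k^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=1_000_000)  # mean 1, standard deviation 1
mu, sigma, k = 1.0, 1.0, 2.0

# Fraction of samples at least k standard deviations away from the mean
tail_freq = np.mean(np.abs(x - mu) >= k * sigma)
print(tail_freq, "<=", 1 / k**2)  # ~0.0498 <= 0.25
```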

Binomial distribution ― Let $X$ be a random variable following the Binomial distribution with $n$ the number of trials and parameters $p$ and $q = 1 - p$. We note $X \sim B(n, p)$ and we have the following identities:

$$P(X = x) = \binom{n}{x} p^x q^{n-x} \quad \text{and} \quad \psi(\omega) = (pe^{i\omega} + q)^n \quad \text{and} \quad E[X] = np \quad \text{and} \quad \textrm{Var}(X) = npq$$

Poisson distribution ― Let $X$ be a random variable following the Poisson distribution with parameter $\mu$. We note $X \sim \textrm{Po}(\mu)$ and we have the following identities:

$$P(X = x) = \frac{\mu^x}{x!} e^{-\mu} \quad \text{and} \quad \psi(\omega) = e^{\mu(e^{i\omega} - 1)} \quad \text{and} \quad E[X] = \mu \quad \text{and} \quad \textrm{Var}(X) = \mu$$

Remark: when $n \to +\infty$ and $p \to 0$ with $np$ held fixed, the Binomial distribution can be approximated by a Poisson distribution with parameter $\mu = np$.
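A quick numerical sketch of this approximation (assuming SciPy): for large $n$ and small $p$, the two pmfs nearly coincide.

```python
import numpy as np
from scipy import stats

n, p = 1000, 0.002  # large n, small p, so mu = n*p = 2
x = np.arange(10)

binom_pmf = stats.binom.pmf(x, n, p)
pois_pmf = stats.poisson.pmf(x, n * p)
print(np.max(np.abs(binom_pmf - pois_pmf)))  # small, on the order of 1e-4
```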


Continuous Probability Distributions


Uniform distribution ― Let $X$ be a random variable following the Uniform distribution over the interval $[a, b]$. We note $X \sim U(a, b)$ and we have the following identities:

$$f(x) = \frac{1}{b-a} \ \text{for } x \in [a, b] \quad \text{and} \quad \psi(\omega) = \frac{e^{i\omega b} - e^{i\omega a}}{(b-a)i\omega} \quad \text{and} \quad E[X] = \frac{a+b}{2} \quad \text{and} \quad \textrm{Var}(X) = \frac{(b-a)^2}{12}$$

Gaussian distribution ― Let $X$ be a random variable following the Gaussian distribution, also known as the Normal distribution, with mean $\mu$ and standard deviation $\sigma$. We note $X \sim N(\mu, \sigma)$ and we have the following identities:

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} \quad \text{and} \quad \psi(\omega) = e^{i\omega\mu - \frac{1}{2}\omega^2\sigma^2} \quad \text{and} \quad E[X] = \mu \quad \text{and} \quad \textrm{Var}(X) = \sigma^2$$
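A short sketch checking these identities against SciPy's built-in distributions (note that `scipy.stats.uniform` is parameterized by `loc=a` and `scale=b-a`).

```python
from scipy import stats

a, b = 2.0, 5.0
U = stats.uniform(loc=a, scale=b - a)  # Uniform on [a, b]
assert abs(U.mean() - (a + b) / 2) < 1e-12
assert abs(U.var() - (b - a) ** 2 / 12) < 1e-12

mu, sigma = 1.0, 3.0
N = stats.norm(loc=mu, scale=sigma)    # Gaussian with mean mu, std sigma
assert abs(N.mean() - mu) < 1e-12 and abs(N.var() - sigma**2) < 1e-12
```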

Jointly Distributed Random Variables


Joint probability density function ― The joint probability density function of two random variables $X$ and $Y$, that we note $f_{XY}$, is defined as follows:

$$\text{(D)} \quad f_{XY}(x_i, y_j) = P(X = x_i \text{ and } Y = y_j) \quad \text{and} \quad \text{(C)} \quad f_{XY}(x, y)\,\Delta x \Delta y = P(x \leqslant X \leqslant x + \Delta x \text{ and } y \leqslant Y \leqslant y + \Delta y)$$

Marginal density ― We define the marginal density for the variable $X$ as follows:

$$\text{(D)} \quad f_X(x_i) = \sum_{j} f_{XY}(x_i, y_j) \quad \text{and} \quad \text{(C)} \quad f_X(x) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\,dy$$

Cumulative distribution ― We define the cumulative distribution $F_{XY}$ as follows:

$$\text{(D)} \quad F_{XY}(x, y) = \sum_{x_i \leqslant x} \sum_{y_j \leqslant y} f_{XY}(x_i, y_j) \quad \text{and} \quad \text{(C)} \quad F_{XY}(x, y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f_{XY}(x', y')\,dy'\,dx'$$

Independence ― Two random variables $X$ and $Y$ are said to be independent if we have:

$$f_{XY}(x, y) = f_X(x) f_Y(y)$$

Moments of joint distributions ― We define the moments of joint distributions of random variables $X$ and $Y$ as follows:

$$E[X^p Y^q] = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} x^p y^q f(x, y)\,dy\,dx$$

Distribution of a sum of independent random variables ― Let $Y = X_1 + \ldots + X_n$ with $X_1, \ldots, X_n$ independent. We have:

$$\psi_Y(\omega) = \prod_{k=1}^{n} \psi_{X_k}(\omega)$$
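This property can be seen numerically via empirical characteristic functions; a sketch with NumPy (averaging $e^{i\omega x}$ over samples) shows the cf of a sum of independent draws is close to the product of the individual cfs.

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.exponential(size=500_000)
x2 = rng.normal(size=500_000)  # independent of x1

def ecf(samples, w):
    """Empirical characteristic function: average of exp(i*w*X) over samples."""
    return np.mean(np.exp(1j * w * samples))

w = 0.8
lhs = ecf(x1 + x2, w)          # psi_Y(w) for Y = X1 + X2
rhs = ecf(x1, w) * ecf(x2, w)  # product of the individual cfs
print(abs(lhs - rhs))          # small, shrinking as the sample size grows
```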

Covariance ― We define the covariance of two random variables $X$ and $Y$, that we note $\sigma_{XY}^2$ or more commonly $\textrm{Cov}(X, Y)$, as follows:

$$\textrm{Cov}(X, Y) \triangleq \sigma_{XY}^2 = E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - \mu_X \mu_Y$$


Correlation ― By noting $\sigma_X, \sigma_Y$ the standard deviations of $X$ and $Y$, we define the correlation between the random variables $X$ and $Y$, noted $\rho_{XY}$, as follows:

$$\rho_{XY} = \frac{\sigma_{XY}^2}{\sigma_X \sigma_Y}$$

Remark: we note that for any random variables $X, Y$, we have $\rho_{XY} \in [-1, 1]$.
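As a final sketch (assuming NumPy), the sample covariance and correlation of two linearly related variables can be computed from the formulas above and checked against `np.cov` and `np.corrcoef`.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)
y = 2 * x + rng.normal(size=100_000)  # correlated with x by construction

# Cov(X, Y) = E[XY] - mu_X * mu_Y, then rho = Cov / (sigma_X * sigma_Y)
cov = np.mean(x * y) - x.mean() * y.mean()
rho = cov / (x.std() * y.std())

print(np.isclose(cov, np.cov(x, y, bias=True)[0, 1]))  # True
print(np.isclose(rho, np.corrcoef(x, y)[0, 1]))        # True
assert -1 <= rho <= 1
```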
