distribution_in_statistic

L 7 - Distribution 1

Bernoulli Distribution

Definition:

A random variable X is said to have a Bernoulli distribution with parameter p, where 0 ≤ p ≤ 1, if its probability mass function is given by

  • pmf: f_X(x) = p^x(1-p)^{1-x} for x = 0, 1
  • mean & expectation: \mu = E[X] = p
  • variance: \sigma^2 = Var[X] = p(1-p)
  • mgf: M_X(t) = E[e^{tX}] = pe^t + (1 - p), \quad t \in (-\infty, \infty)
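
A quick numerical check of these values (a minimal sketch assuming SciPy is installed; the names p and X are illustrative):

```python
from scipy.stats import bernoulli

p = 0.3
X = bernoulli(p)

print(X.pmf(0), X.pmf(1))      # 0.7, 0.3 — matches p^x (1-p)^{1-x}
print(X.mean(), p)             # both 0.3
print(X.var(), p * (1 - p))    # both 0.21
```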

n Bernoulli trials

Definition: a Bernoulli experiment performed n times:

  • X_1, X_2, \dots, X_n are independent Bernoulli random variables (all trials are independent)
  • all with the same parameter p
  • the total number of successes X = X_1 + X_2 + \dots + X_n then has a binomial distribution, as the simulation below illustrates
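
A simulation sketch of this connection (assuming NumPy and SciPy; n, p, and reps are arbitrary illustrative values): summing n independent Bernoulli(p) indicators reproduces the Binomial(n, p) pmf defined next.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n, p, reps = 10, 0.3, 100_000

# Each row is one experiment: n independent Bernoulli(p) trials.
trials = rng.random((reps, n)) < p
successes = trials.sum(axis=1)   # total successes per experiment

# Empirical frequency of each success count vs. the Binomial(n, p) pmf.
for k in range(n + 1):
    emp = np.mean(successes == k)
    print(f"k={k:2d}  empirical={emp:.4f}  binomial pmf={binom.pmf(k, n, p):.4f}")
```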

Binomial Distribution

Definition:

A random variable X is said to have a binomial distribution with parameters n and p if its probability mass function is given by

  • pmf: f_X(x) = \binom{n}{x}p^x(1-p)^{n-x} for x = 0, 1, \dots, n
  • mean & expectation: \mu = E[X] = np
  • variance: \sigma^2 = Var[X] = np(1-p)
  • mgf: M_X(t) = (pe^t + 1 - p)^n
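
The mgf can be checked numerically against its definition E[e^{tX}] (a sketch assuming SciPy; t = 0.5 is an arbitrary test point):

```python
import numpy as np
from scipy.stats import binom

n, p, t = 10, 0.3, 0.5
X = binom(n, p)
ks = np.arange(n + 1)

print(X.mean(), n * p)            # both 3.0
print(X.var(), n * p * (1 - p))   # both 2.1

# E[e^{tX}] computed directly from the pmf vs. the closed form.
mgf_sum = np.sum(np.exp(t * ks) * X.pmf(ks))
mgf_formula = (p * np.exp(t) + 1 - p) ** n
print(mgf_sum, mgf_formula)       # agree to floating-point precision
```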

Derivation: mean and variance of the binomial distribution

If the random variables X_1, X_2, \dots, X_n are independent, then

E[X_1 + X_2 + \dots + X_n] = E[X_1] + E[X_2] + \dots + E[X_n]

Var[X_1 + X_2 + \dots + X_n] = Var[X_1] + Var[X_2] + \dots + Var[X_n]

Alternatively, differentiating the binomial mgf M(t) = \left[(1-p) + pe^t\right]^n:

M'(t) = n\left[(1-p) + pe^t\right]^{n-1} pe^t \Rightarrow M'(0) = E[X] = np

M''(t) = n(n-1)\left[(1-p) + pe^t\right]^{n-2} p^2 e^{2t} + n\left[(1-p) + pe^t\right]^{n-1} pe^t

M''(0) = E[X^2] = n(n-1)p^2 + np

\operatorname{Var}[X] = E[X^2] - (E[X])^2 = n^2p^2 - np^2 + np - n^2p^2 = np(1-p)
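
The same derivatives can be verified symbolically (a sketch assuming SymPy; the printed expressions may appear in an equivalent factored form):

```python
import sympy as sp

t, n, p = sp.symbols('t n p', positive=True)

# Binomial mgf and its first two derivatives evaluated at t = 0.
M = ((1 - p) + p * sp.exp(t)) ** n
EX = sp.simplify(sp.diff(M, t).subs(t, 0))      # n*p
EX2 = sp.simplify(sp.diff(M, t, 2).subs(t, 0))  # equivalent to n*(n-1)*p**2 + n*p
var = sp.simplify(sp.expand(EX2 - EX**2))       # equivalent to n*p*(1 - p)

print(EX, EX2, var)
```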

Hypergeometric Distribution

Definition:

A random variable X is said to have a hypergeometric distribution with parameters N (population size), K (successes in the population), and n (draws without replacement) if its probability mass function is given by

  • pmf: f_X(x) = \frac{\binom{K}{x}\binom{N-K}{n-x}}{\binom{N}{n}}
  • mean & expectation: \mu = E[X] = \frac{nK}{N}
  • variance: \sigma^2 = Var[X] = n\,\frac{K}{N}\cdot\frac{N-K}{N}\cdot\frac{N-n}{N-1}
  • mgf: no simple closed form; the draws are dependent (sampling is without replacement), so the binomial-style formula \left(\frac{K}{N}e^t + 1 - \frac{K}{N}\right)^n does not apply
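
A numerical check of the mean and variance formulas (a sketch assuming SciPy; note that scipy.stats.hypergeom takes its arguments in the order population size, successes in the population, draws):

```python
from scipy.stats import hypergeom

# Notation of the notes: population N, successes K, draws n.
N, K, n = 50, 20, 10
X = hypergeom(N, K, n)

print(X.mean(), n * K / N)                              # both 4.0
print(X.var(), n * (K/N) * ((N-K)/N) * ((N-n)/(N-1)))   # both ~1.9592
print(X.pmf(4))                                         # P(X = 4)
```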

L 8 - Distribution 2

Geometric Distribution

Definition:

A random variable X is said to have a geometric distribution with parameter p if its probability mass function is given by

  • pmf: f_X(x) = p(1-p)^{x-1} for x = 1, 2, \dots (the number of trials up to and including the first success)
  • mean & expectation: \mu = E[X] = \frac{1}{p}
  • variance: \sigma^2 = Var[X] = \frac{1-p}{p^2}
  • mgf: M_X(t) = \frac{pe^t}{1-(1-p)e^t} for t < -\ln(1-p)
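
A numerical check (a sketch assuming SciPy, whose geom uses the same trials-until-first-success convention; t = 0.1 is an arbitrary point inside the mgf's domain):

```python
import numpy as np
from scipy.stats import geom

p = 0.25
X = geom(p)

print(X.mean(), 1 / p)           # both 4.0
print(X.var(), (1 - p) / p**2)   # both 12.0

# mgf at t < -ln(1-p) ~ 0.288: truncated pmf sum vs. the closed form.
t = 0.1
ks = np.arange(1, 2000)
mgf_sum = np.sum(np.exp(t * ks) * X.pmf(ks))
print(mgf_sum, p * np.exp(t) / (1 - (1 - p) * np.exp(t)))
```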

Negative Binomial Distribution

Definition:
A random variable X is said to have a negative binomial distribution with parameters r and p if its probability mass function is given by

  • pmf: f_X(x) = \binom{x-1}{r-1}p^r(1-p)^{x-r} for x = r, r+1, \dots (the number of trials up to and including the r-th success)
  • mean & expectation: \mu = E[X] = \frac{r}{p}
  • variance: \sigma^2 = Var[X] = \frac{r(1-p)}{p^2}
  • mgf: M_X(t) = \left(\frac{pe^t}{1-(1-p)e^t}\right)^r for t < -\ln(1-p)
  • the negative binomial distribution generalizes the geometric distribution: r = 1 recovers it, and X is a sum of r independent geometric random variables, which also yields the mgf above
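
A check against SciPy (a sketch; scipy.stats.nbinom counts failures before the r-th success, so the trial count X of the notes is that variable plus r):

```python
from scipy.stats import nbinom

r, p = 3, 0.4
Y = nbinom(r, p)   # Y = number of failures before the r-th success

print(Y.mean() + r, r / p)           # both 7.5
print(Y.var(), r * (1 - p) / p**2)   # both 11.25 (shifting by r leaves the variance unchanged)

# pmf of the notes at x trials equals SciPy's pmf at x - r failures:
x = 5
print(Y.pmf(x - r))                  # P(X = 5) = C(4,2) * 0.4^3 * 0.6^2 = 0.13824
```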

Poisson Distribution

Definition:
A random variable X is said to have a Poisson distribution with parameter \lambda > 0 if its probability mass function is given by

  • pmf: f_X(x) = \frac{e^{-\lambda}\lambda^x}{x!} for x = 0, 1, 2, \dots
  • mean & expectation: \mu = E[X] = \lambda
  • variance: \sigma^2 = Var[X] = \lambda
  • mgf: M_X(t) = e^{\lambda(e^t-1)}
  • the Poisson distribution is a limiting case of the binomial distribution when n is large, p is small, and np \to \lambda, as the comparison below shows
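
A numerical illustration of the limit (a sketch assuming SciPy; \lambda = 2 and the grid of n values are arbitrary):

```python
import numpy as np
from scipy.stats import binom, poisson

lam = 2.0
ks = np.arange(8)

# With p = lam/n, the Binomial(n, p) pmf approaches the Poisson(lam) pmf as n grows.
for n in (10, 100, 10_000):
    err = np.max(np.abs(binom.pmf(ks, n, lam / n) - poisson.pmf(ks, lam)))
    print(f"n={n:6d}  max |binomial - Poisson| = {err:.2e}")
```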

L 9 & 10 - Continuous Random Variable 2

Lecture 9 introduces continuous random variables and then presents the common distributions of continuous random variables.

Uniform Distribution

Definition:

A random variable X is said to have a uniform distribution on [a, b] if its probability density function is given by

  • pdf: f_X(x) = \frac{1}{b-a} for a \le x \le b (and 0 otherwise)
  • mean & expectation: \mu = E[X] = \frac{a+b}{2}
  • variance: \sigma^2 = Var[X] = \frac{(b-a)^2}{12}
  • mgf: M_X(t) = \frac{e^{tb}-e^{ta}}{t(b-a)} for t \ne 0, with M_X(0) = 1
  • the uniform distribution is often used to model situations where all outcomes in an interval are equally likely
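
A quick check (a sketch assuming SciPy, which parametrizes the uniform distribution by loc = a and scale = b - a):

```python
from scipy.stats import uniform

a, b = 2.0, 5.0
X = uniform(loc=a, scale=b - a)

print(X.mean(), (a + b) / 2)      # both 3.5
print(X.var(), (b - a)**2 / 12)   # both 0.75
print(X.pdf(3.0), 1 / (b - a))    # both ~0.333 inside [a, b]
print(X.pdf(6.0))                 # 0.0 outside the support
```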

Exponential Distribution

Definition:

A random variable X is said to have an exponential distribution with rate \lambda > 0 if its probability density function is given by

  • pdf: f_X(x) = \lambda e^{-\lambda x} for x \ge 0 (and 0 otherwise)
  • mean & expectation: \mu = E[X] = \frac{1}{\lambda}
  • variance: \sigma^2 = Var[X] = \frac{1}{\lambda^2}
  • mgf: M_X(t) = \frac{\lambda}{\lambda - t} for t < \lambda
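
A numerical check of the moments and the mgf (a sketch assuming SciPy, which uses scale = 1/\lambda; any t < \lambda keeps the mgf finite):

```python
import numpy as np
from scipy.stats import expon
from scipy.integrate import quad

lam, t = 2.0, 0.5
X = expon(scale=1 / lam)

print(X.mean(), 1 / lam)     # both 0.5
print(X.var(), 1 / lam**2)   # both 0.25

# mgf: numerical integral of e^{tx} f(x) over x >= 0 vs. the closed form.
mgf_num, _ = quad(lambda x: np.exp(t * x) * X.pdf(x), 0, np.inf)
print(mgf_num, lam / (lam - t))   # both ~1.333
```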

Normal Distribution

Definition:

A random variable X is said to have a normal distribution with parameters \mu and \sigma^2 if its probability density function is given by

  • pdf: f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}
  • mean & expectation: \mu = E[X]
  • variance: \sigma^2 = Var[X]
  • mgf: M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2}
  • the normal distribution is the most important continuous distribution in probability and statistics
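
As with the other distributions, the mgf can be verified numerically (a sketch assuming SciPy; \mu, \sigma, and t are arbitrary test values):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma, t = 1.0, 2.0, 0.3
X = norm(loc=mu, scale=sigma)

# mgf: numerical integral of e^{tx} f(x) vs. the closed form.
mgf_num, _ = quad(lambda x: np.exp(t * x) * X.pdf(x), -np.inf, np.inf)
mgf_formula = np.exp(mu * t + 0.5 * sigma**2 * t**2)
print(mgf_num, mgf_formula)   # both ~1.616
```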