
Continuous Distributions 2

1. Normal Distribution

1.1 Definition

A continuous random variable $X$ is said to follow the normal distribution with mean $\mu$ and variance $\sigma^2$ if:

$$f_X(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

1.2 Significance

Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.

Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.
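As a quick numerical illustration of the central limit theorem (a sketch, not a proof — the sample sizes and seed below are arbitrary choices), the mean of many draws from a decidedly non-normal uniform distribution already behaves like a normal variable:

```python
import random
import statistics

random.seed(0)

def sample_mean(n=1000):
    # Mean of n draws from Uniform(0, 1), which has mean 1/2 and variance 1/12.
    return statistics.fmean(random.random() for _ in range(n))

# Each entry is one "observation" of the sample mean; by the CLT their
# distribution is approximately N(1/2, 1/(12*1000)).
means = [sample_mean() for _ in range(5000)]

mu_hat = statistics.fmean(means)
sd_hat = statistics.stdev(means)
```

With these sizes, `mu_hat` lands close to $1/2$ and `sd_hat` close to $\sqrt{1/12000}\approx 0.0091$.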

1.3 Standard Normal distribution

It is the normal distribution with zero mean and unit variance:

$$\mathcal{S}\mathcal{N}=\mathcal{N}(0,1)$$

1.4 Linear Transformation of a Normal random variable

1.4.1 Opposite of standard normal random variable

Let $X\sim\mathcal{N}(0,1)$.

$$\begin{align*} \forall x\in\mathbb{R},\quad F_{-X}(x)&=\mathcal{P}(-X<x)\\ &=\mathcal{P}(X>-x)\\ &=\int_{-x}^{+\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}}\,\mathrm{d}t\\ &=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{-\frac{u^2}{2}}\,\mathrm{d}u \quad\text{with }u=-t\\ &=\mathcal{P}(X<x)\\ &=F_X(x) \end{align*}$$

As a conclusion:

$$X\sim\mathcal{N}(0,1)\implies -X\sim \mathcal{N}(0,1)$$

1.4.2 Linear transformation of a Normal random variable

  • Let $a\in\mathbb{R}^*_+,\ b\in\mathbb{R},\ \mu\in\mathbb{R},\ \sigma\in\mathbb{R}_+^*$

  • Let $X\sim \mathcal{N}(\mu,\sigma^2)$ and $Y=aX+b$

$$\begin{align*} \forall x\in\mathbb{R},\quad F_Y(x)&=\mathcal{P}(Y<x)\\ &=\mathcal{P}(aX<x-b)\\ &=\mathcal{P}\left(X<\frac{x-b}{a}\right)\\ &=F_X\left(\frac{x-b}{a}\right)\\ \implies \forall x\in\mathbb{R},\quad f_Y(x)&=\frac{1}{a}F_X'\left(\frac{x-b}{a}\right)\\ &=\frac{1}{a}f_X\left(\frac{x-b}{a}\right)\\ &=\frac{1}{\sqrt{2\pi}\sigma a}e^{-\frac{\left(\frac{x-b}{a}-\mu\right)^2}{2\sigma^2}}\\ &=\frac{1}{\sqrt{2\pi}\sigma a}e^{-\frac{\left(x-b-a\mu\right)^2}{2a^2\sigma^2}}\\ \implies aX+b&\sim\mathcal{N}(a\mu+b,a^2\sigma^2) \end{align*}$$

In particular:

$$\boxed{X\sim\mathcal{N}(\mu,\sigma^2)\iff \frac{X-\mu}{\sigma}=\frac{X-\mathbb{E}[X]}{\sqrt{\mathbb{V}[X]}}\sim\mathcal{N}(0,1)}$$
  • For $a<0$, we have $\frac{X-\mu}{\sigma}\sim\mathcal{N}(0,1)$, so $-\frac{X-\mu}{\sigma}\sim\mathcal{N}(0,1)$.

    We then have $-X+\mu\sim\mathcal{N}(0,\sigma^2)\implies -X\sim\mathcal{N}(-\mu,\sigma^2)$.

    Which implies the following:

    $$aX+b=(-a)(-X)+b\sim \mathcal{N}\left((-a)(-\mu)+b,(-a)^2\sigma^2\right)=\mathcal{N}(a\mu+b,a^2\sigma^2)$$

As a conclusion:

$$\boxed{X\sim \mathcal{N}(\mu,\sigma^2)\implies aX+b\sim\mathcal{N}\left(a\mu+b,a^2\sigma^2\right)}$$
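A Monte Carlo sanity check of the boxed result (an illustration, not a proof; the parameter values and seed are arbitrary):

```python
import random
import statistics

random.seed(1)
mu, sigma, a, b = 2.0, 3.0, -1.5, 4.0

# Sample Y = aX + b with X ~ N(mu, sigma^2); the theory above predicts
# Y ~ N(a*mu + b, a^2 * sigma^2), i.e. mean 1.0 and variance 20.25 here.
ys = [a * random.gauss(mu, sigma) + b for _ in range(200_000)]

mean_y = statistics.fmean(ys)
var_y = statistics.variance(ys)
```

Note the negative $a$: the derivation's second case is exercised as well.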

1.5 Moments

1.5.1 Moments of a centered Normal distribution

Let $X\sim \mathcal{N}(0,\sigma^2)$; we have:

$$\begin{align*} \forall n\in\mathbb{N}_{\ge 2},\quad \mathbb{E}[X^n]&=\int_{\mathbb{R}}x^{n}f_X(x)\,\mathrm{d}x\\ &=\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}\sigma}x^{n}e^{-\frac{x^2}{2\sigma^2}}\,\mathrm{d}x\\ &=\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}\sigma}x^{n-1}\cdot xe^{-\frac{x^2}{2\sigma^2}}\,\mathrm{d}x\\ &=\left[\frac{x^{n-1}}{\sqrt{2\pi}\sigma}\left(-\sigma^2e^{-\frac{x^2}{2\sigma^2}}\right)\right]^{+\infty}_{-\infty}-\int_{\mathbb{R}}\frac{(n-1)x^{n-2}}{\sqrt{2\pi}\sigma}\left(-\sigma^2e^{-\frac{x^2}{2\sigma^2}}\right)\,\mathrm{d}x\\ &=(n-1)\sigma^2\,\mathbb{E}[X^{n-2}]\\ \implies \forall n\in\mathbb{N}_{\ge 2},\quad \mathbb{E}[X^n]&=\mathbb{E}[X^{n \bmod 2}]\prod_{k=1}^{\lfloor n/2\rfloor}\big((2k-1)\sigma^2\big)\\ &=\mathbb{E}[X^{n \bmod 2}]\,\sigma^{2\lfloor n/2\rfloor}\prod_{k=1}^{\lfloor n/2\rfloor}(2k-1)\\ \implies \forall n\in\mathbb{N}^*,\quad \mathbb{E}[X^{2n}]&=\sigma^{2n}\prod_{k=1}^{n}(2k-1)\\ &=\sigma^{2n}\frac{\prod_{k=1}^{n}2k(2k-1)}{\prod_{k=1}^{n}2k}\\ &=\sigma^{2n}\cdot\frac{(2n)!}{2^n n!}\\ \forall n\in\mathbb{N},\quad \mathbb{E}[X^{2n+1}]&=0\quad\text{because }\mathcal{N}(0,\sigma^2)\text{ is symmetric} \end{align*}$$

In particular, the expected value $\mathbb{E}[X]$ is:

$$\boxed{\mathbb{E}[X]=0}$$

Also, the variance:

$$\boxed{\mathbb{V}[X]=\sigma^2}$$
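The even-moment formula $\mathbb{E}[X^{2n}]=\frac{(2n)!}{2^n n!}\sigma^{2n}$ can be spot-checked by Monte Carlo; the case $n=2$ recovers the classic $\mathbb{E}[X^4]=3\sigma^4$. This is a rough numerical check under arbitrary choices of $\sigma$, $n$, seed, and sample size:

```python
import math
import random
import statistics

random.seed(2)
sigma = 1.5
n = 2

# Closed form derived above: (2n)!/(2^n n!) * sigma^(2n).
theory = math.factorial(2 * n) / (2 ** n * math.factorial(n)) * sigma ** (2 * n)

# Monte Carlo estimate of E[X^(2n)] for X ~ N(0, sigma^2).
estimate = statistics.fmean(
    random.gauss(0, sigma) ** (2 * n) for _ in range(500_000)
)
```

Here `theory` equals $3\sigma^4=15.1875$, and `estimate` lands within sampling error of it.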

1.5.2 Central Moments

Let $X\sim \mathcal{N}(\mu,\sigma^2)$.

As $X-\mathbb{E}[X]\sim \mathcal{N}(0,\sigma^2)$:

$$\forall n\in\mathbb{N},\quad \begin{cases} \mathbb{E}\left[\left(X-\mathbb{E}[X]\right)^{2n}\right]=\dfrac{(2n)!}{2^n n!}\sigma^{2n}\\[1ex] \mathbb{E}\left[\left(X-\mathbb{E}[X]\right)^{2n+1}\right]=0 \end{cases}$$

1.5.3 Non-central moments

Let $X\sim \mathcal{N}(\mu,\sigma^2)$.

$$\begin{align*} \forall n\in\mathbb{N},\quad \mathbb{E}[X^{2n}]&=\mathbb{E}\left[\left(X-\mathbb{E}[X]+\mathbb{E}[X]\right)^{2n}\right]\\ &=\sum_{k=0}^{2n}\binom{2n}{k}\mathbb{E}[X]^{2n-k}\,\mathbb{E}\left[\left(X-\mathbb{E}[X]\right)^{k}\right]\\ &=\sum_{k=0}^{n}\binom{2n}{2k}\mathbb{E}[X]^{2n-2k}\,\mathbb{E}\left[\left(X-\mathbb{E}[X]\right)^{2k}\right]\\ &=\sum_{k=0}^{n}\binom{2n}{2k}\frac{(2k)!}{2^k k!}\mu^{2n-2k}\sigma^{2k}\\ \forall n\in\mathbb{N},\quad \mathbb{E}[X^{2n+1}]&=\mathbb{E}\left[\left(X-\mathbb{E}[X]+\mathbb{E}[X]\right)^{2n+1}\right]\\ &=\sum_{k=0}^{2n+1}\binom{2n+1}{k}\mathbb{E}[X]^{2n+1-k}\,\mathbb{E}\left[\left(X-\mathbb{E}[X]\right)^{k}\right]\\ &=\sum_{k=0}^{n}\binom{2n+1}{2k}\mathbb{E}[X]^{2n+1-2k}\,\mathbb{E}\left[\left(X-\mathbb{E}[X]\right)^{2k}\right]\\ &=\sum_{k=0}^{n}\binom{2n+1}{2k}\frac{(2k)!}{2^k k!}\mu^{2n+1-2k}\sigma^{2k} \end{align*}$$
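The two sums above can be checked deterministically against the well-known low-order closed forms $\mathbb{E}[X^2]=\mu^2+\sigma^2$ and $\mathbb{E}[X^3]=\mu^3+3\mu\sigma^2$. A small sketch (the helper names and parameter values are arbitrary):

```python
import math

mu, sigma = 1.7, 0.9

def even_moment(n):
    # E[X^(2n)] = sum_k C(2n,2k) (2k)!/(2^k k!) mu^(2n-2k) sigma^(2k)
    return sum(
        math.comb(2 * n, 2 * k) * math.factorial(2 * k) / (2 ** k * math.factorial(k))
        * mu ** (2 * n - 2 * k) * sigma ** (2 * k)
        for k in range(n + 1)
    )

def odd_moment(n):
    # E[X^(2n+1)] = sum_k C(2n+1,2k) (2k)!/(2^k k!) mu^(2n+1-2k) sigma^(2k)
    return sum(
        math.comb(2 * n + 1, 2 * k) * math.factorial(2 * k) / (2 ** k * math.factorial(k))
        * mu ** (2 * n + 1 - 2 * k) * sigma ** (2 * k)
        for k in range(n + 1)
    )
```

For $n=1$, `even_moment(1)` gives $\mu^2+\sigma^2$ and `odd_moment(1)` gives $\mu^3+3\mu\sigma^2$, as expected.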

1.6 Sum of independent normal variables

1.6.1 Case of two centered normal variables

  • Let $X_1\sim \mathcal{N}(0,\sigma_1^2)$ and $X_2\sim\mathcal{N}(0,\sigma_2^2)$ be two independent centered normal variables
  • Let $Y=X_1+X_2$
$$\begin{align*} \forall x\in\mathbb{R},\quad f_Y(x)&=\int_{\mathbb{R}}f_{X_1}(t)f_{X_2}(x-t)\,\mathrm{d}t\\ &=\frac{1}{2\pi\sigma_1\sigma_2}\int_{\mathbb{R}}e^{-\left(\frac{t^2}{2\sigma_1^2}+\frac{(x-t)^2}{2\sigma_2^2}\right)}\,\mathrm{d}t\\ &=\frac{1}{2\pi\sigma_1\sigma_2}\int_{\mathbb{R}}\exp\left(-\frac{1}{2}\left(\frac{t^2}{\sigma_1^2}+\frac{x^2-2xt+t^2}{\sigma_2^2}\right)\right)\,\mathrm{d}t\\ &=\frac{e^{-\frac{x^2}{2\sigma_2^2}}}{2\pi\sigma_1\sigma_2}\int_{\mathbb{R}}\exp\left(-\frac{1}{2}\left(\left(\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}\right)t^2-2\frac{x}{\sigma_2^2}t\right)\right)\,\mathrm{d}t\\ &=\frac{e^{-\frac{x^2}{2\sigma_2^2}}}{2\pi\sigma_1\sigma_2}\int_{\mathbb{R}}\exp\left(-\frac{1}{2}\left(\frac{t^2}{\sigma_*^2}-2\frac{x}{\sigma_2^2}t\right)\right)\,\mathrm{d}t\quad\text{with }\sigma_*^2=\frac{1}{\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}}\\ &=\frac{e^{-\frac{x^2}{2\sigma_2^2}}}{2\pi\sigma_1\sigma_2}\int_{\mathbb{R}}\exp\left(-\frac{1}{2\sigma_*^2}\left(t^2-2\frac{x\sigma_*^2}{\sigma_2^2}t\right)\right)\,\mathrm{d}t\\ &=\frac{e^{-\frac{x^2}{2\sigma_2^2}}}{2\pi\sigma_1\sigma_2}\int_{\mathbb{R}}\exp\left(-\frac{1}{2\sigma_*^2}\left(\left(t-\frac{x\sigma_*^2}{\sigma_2^2}\right)^2-\frac{x^2\sigma_*^4}{\sigma_2^4}\right)\right)\,\mathrm{d}t\\ &=\frac{e^{-\frac{x^2}{2\sigma_2^2}+\frac{x^2\sigma_*^2}{2\sigma_2^4}}}{2\pi\sigma_1\sigma_2}\int_{\mathbb{R}}e^{-\frac{\left(t-\frac{x\sigma_*^2}{\sigma_2^2}\right)^2}{2\sigma_*^2}}\,\mathrm{d}t\\ &=\frac{\sqrt{2\pi}\sigma_*\,e^{\frac{x^2}{2\sigma_2^2}\left(\frac{\sigma_*^2}{\sigma_2^2}-1\right)}}{2\pi\sigma_1\sigma_2}\\ &=\frac{e^{\frac{x^2}{2\sigma_2^2}\left(\frac{1}{\sigma_2^2\left(\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}\right)}-1\right)}}{\sqrt{2\pi}\frac{\sigma_1\sigma_2}{\sigma_*}}\\ &=\frac{e^{\frac{x^2}{2\sigma_2^2}\left(\frac{1}{\frac{\sigma_2^2}{\sigma_1^2}+1}-1\right)}}{\sqrt{2\pi\frac{\sigma_1^2\sigma_2^2}{\sigma_*^2}}}\\ &=\frac{e^{-\frac{x^2}{2\sigma_2^2}\cdot\frac{\sigma_2^2}{\sigma_1^2}\cdot\frac{1}{\frac{\sigma_2^2}{\sigma_1^2}+1}}}{\sqrt{2\pi\sigma_1^2\sigma_2^2\left(\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}\right)}}\\ &=\frac{e^{-\frac{x^2}{2}\cdot\frac{1}{\sigma_1^2+\sigma_2^2}}}{\sqrt{2\pi\left(\sigma_1^2+\sigma_2^2\right)}}\\ &=\frac{e^{-\frac{x^2}{2(\sigma_1^2+\sigma_2^2)}}}{\sqrt{2\pi}\cdot\sqrt{\sigma_1^2+\sigma_2^2}} \end{align*}$$

Conclusion:

$$\boxed{Y\sim\mathcal{N}\left(0,\sigma_1^2+\sigma_2^2\right)}$$

1.6.2 Case of two independent normal variables

  • Let $X_1\sim \mathcal{N}(\mu_1,\sigma_1^2)$ and $X_2\sim\mathcal{N}(\mu_2,\sigma_2^2)$ be two independent normal variables
  • Let $Y=X_1+X_2$

We have:

$$\begin{cases} X_1-\mu_1 \sim\mathcal{N}(0,\sigma_1^2)\\ X_2-\mu_2 \sim\mathcal{N}(0,\sigma_2^2) \end{cases} \implies (X_1-\mu_1)+(X_2-\mu_2)\sim\mathcal{N}\left(0,\sigma_1^2+\sigma_2^2\right)$$

So we can conclude that:

$$\boxed{Y=X_1+X_2\sim\mathcal{N}\left(\mu_1+\mu_2,\sigma_1^2+\sigma_2^2\right)}$$

1.6.3 General Case

  • Let $n\in\mathbb{N}^*$
  • Let $X_1\sim\mathcal{N}(\mu_1,\sigma_1^2),\dots,X_n\sim\mathcal{N}(\mu_n,\sigma_n^2)$ be $n$ independent random variables

It can be concluded by induction from $1.6.2$ that:

$$\boxed{\sum_{i=1}^nX_i\sim\mathcal{N}\left(\sum_{i=1}^n\mu_i,\sum_{i=1}^n\sigma_i^2\right)}$$
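A Monte Carlo illustration of the boxed result for $n=3$ (the parameter values, seed, and sample size are arbitrary choices for this sketch):

```python
import random
import statistics

random.seed(3)
params = [(1.0, 0.5), (-2.0, 1.5), (0.5, 2.0)]  # (mu_i, sigma_i)

# Each draw sums three independent normals; the theory predicts the sum is
# N(mu_1 + mu_2 + mu_3, sigma_1^2 + sigma_2^2 + sigma_3^2) = N(-0.5, 6.5).
sums = [sum(random.gauss(mu, s) for mu, s in params) for _ in range(200_000)]

mean_sum = statistics.fmean(sums)
var_sum = statistics.variance(sums)
```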

2. $\Gamma$ distributions

2.1 Definition

  1. Let $\alpha,\beta\in\mathbb{R}_+^*$

  2. Let $X$ be a continuous random variable

By definition, $X$ is said to follow the gamma distribution of parameters $(\alpha,\beta)$ if:

$$f_X(x)=\frac{x^{\alpha-1}\beta^\alpha e^{-\beta x}}{\Gamma(\alpha)},\quad x>0$$

We denote it by:

$$X\sim \Gamma(\alpha,\beta)$$
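As a numerical sanity check that the density above (rate parameterization) integrates to 1, a simple rectangle-rule integration works; the grid step and truncation point below are arbitrary choices:

```python
import math

alpha, beta = 2.5, 1.8  # arbitrary positive parameters

def pdf(x):
    # Gamma(alpha, beta) density with beta as a RATE parameter, as defined above.
    return x ** (alpha - 1) * beta ** alpha * math.exp(-beta * x) / math.gamma(alpha)

# Rectangle-rule integral on (0, 40]; the tail beyond x = 40 is negligible here.
h = 1e-4
total = sum(pdf(k * h) * h for k in range(1, 400_000))
```

`total` comes out equal to 1 up to the discretization error.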

2.2 Significance

The gamma distribution has been used to model the size of insurance claims and rainfall. This means that aggregate insurance claims and the amount of rainfall accumulated in a reservoir are modelled by a gamma process — much like the exponential distribution generates a Poisson process.

The gamma distribution is also used to model errors in multi-level Poisson regression models, because a mixture of Poisson distributions with gamma distributed rates has a known closed form distribution, called negative binomial.

In wireless communication, the gamma distribution is used to model the multi-path fading of signal power.

2.3 Exponential Distribution as a Gamma Distribution

We have:

$$\mathcal{E}(\lambda)=\Gamma(1,\lambda)$$

2.4 Moments

2.4.1 Non-Central moments

Let $X\sim \Gamma(\alpha,\beta)$.

$$\begin{align*} \forall n\in\mathbb{N},\quad \mathbb{E}[X^n]&=\int_{\mathbb{R}_+}x^nf_X(x)\,\mathrm{d}x\\ &=\int_{\mathbb{R}_+}\frac{x^{\alpha+n-1}\beta^\alpha e^{-\beta x}}{\Gamma(\alpha)}\,\mathrm{d}x\\ &=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha)\beta^n}\int_{\mathbb{R}_+}\frac{x^{\alpha+n-1}\beta^{\alpha+n}e^{-\beta x}}{\Gamma(\alpha+n)}\,\mathrm{d}x\\ &=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha)\beta^n}\\ &=\beta^{-n}\prod_{i=0}^{n-1}(\alpha+i) \end{align*}$$

In particular, the expected value $\mathbb{E}[X]$ is:

$$\boxed{\mathbb{E}[X]=\frac{\alpha}{\beta}}$$
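The two closed forms of $\mathbb{E}[X^n]$ derived above — the gamma-function ratio and the rising product — can be checked against each other deterministically. A small sketch with arbitrary parameter values:

```python
import math

alpha, beta = 2.3, 0.7  # arbitrary positive parameters

def moment_gamma(n):
    # E[X^n] = Gamma(alpha + n) / (Gamma(alpha) * beta^n)
    return math.gamma(alpha + n) / (math.gamma(alpha) * beta ** n)

def moment_product(n):
    # E[X^n] = beta^(-n) * prod_{i=0}^{n-1} (alpha + i)
    return beta ** (-n) * math.prod(alpha + i for i in range(n))
```

Both agree for every $n$, and `moment_gamma(1)` recovers the boxed $\alpha/\beta$.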

2.4.2 Central Moments

$$\begin{align*} \forall n\in\mathbb{N},\quad \mathbb{E}\left[\left(X-\mathbb{E}[X]\right)^n\right]&=\sum_{k=0}^n\binom{n}{k}(-1)^{n-k}\,\mathbb{E}[X^k]\,\mathbb{E}[X]^{n-k}\\ &=\sum_{k=0}^n\binom{n}{k}(-1)^{n-k}\frac{\alpha^{n-k}\Gamma(\alpha+k)}{\beta^n\Gamma(\alpha)} \end{align*}$$

In particular, the variance V[X]\mathbb{V}[X] is:

$$\boxed{\mathbb{V}[X]=\frac{\alpha^2\Gamma(\alpha)-2\alpha\Gamma(\alpha+1)+\Gamma(\alpha+2)}{\beta^2\Gamma(\alpha)}=\frac{\alpha^2-2\alpha^2+\alpha(\alpha+1)}{\beta^2}=\frac{\alpha}{\beta^2}}$$

2.5 Sum of gamma distributions

2.5.1 Two gamma distributions

  1. Let $\alpha_1,\alpha_2,\beta\in\mathbb{R}_+^*$
  2. Let $X\sim \Gamma(\alpha_1,\beta)$ and $Y\sim\Gamma(\alpha_2,\beta)$ be two independent random variables, and let $Z=X+Y$.
$$\begin{align*} \forall x\in\mathbb{R}_+^*,\quad f_Z(x)&=\int_{\mathbb{R}}f_X(t)f_Y(x-t)\,\mathrm{d}t\\ &=\int_0^xf_X(t)f_Y(x-t)\,\mathrm{d}t\\ &=\int_0^x\frac{t^{\alpha_1-1}\beta^{\alpha_1}e^{-\beta t}}{\Gamma(\alpha_1)}\cdot\frac{(x-t)^{\alpha_2-1}\beta^{\alpha_2}e^{-\beta(x-t)}}{\Gamma(\alpha_2)}\,\mathrm{d}t\\ &=\frac{\beta^{\alpha_1+\alpha_2}e^{-\beta x}}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\int_0^xt^{\alpha_1-1}(x-t)^{\alpha_2-1}\,\mathrm{d}t\\ &=\frac{\beta^{\alpha_1+\alpha_2}e^{-\beta x}}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\int_0^1(xu)^{\alpha_1-1}\left(x(1-u)\right)^{\alpha_2-1}x\,\mathrm{d}u\quad\text{with }t=xu\\ &=\frac{\beta^{\alpha_1+\alpha_2}x^{\alpha_1+\alpha_2-1}e^{-\beta x}}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\int_0^1u^{\alpha_1-1}(1-u)^{\alpha_2-1}\,\mathrm{d}u\\ &=\beta^{\alpha_1+\alpha_2}\frac{\mathrm{B}(\alpha_1,\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}x^{\alpha_1+\alpha_2-1}e^{-\beta x}\\ &=\frac{\beta^{\alpha_1+\alpha_2}x^{\alpha_1+\alpha_2-1}e^{-\beta x}}{\Gamma(\alpha_1+\alpha_2)}\quad\text{because }\mathrm{B}(\alpha_1,\alpha_2)=\frac{\Gamma(\alpha_1)\Gamma(\alpha_2)}{\Gamma(\alpha_1+\alpha_2)}\\ \forall x\in\mathbb{R}_-,\quad f_Z(x)&=0 \end{align*}$$

So we can conclude that:

$$\boxed{Z=X+Y\sim \Gamma(\alpha_1+\alpha_2,\beta)}$$
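A Monte Carlo check of the convolution result, with arbitrary parameters. One caveat worth flagging: Python's `random.gammavariate(alpha, beta)` takes `beta` as a *scale* parameter, so a rate-$\beta$ $\Gamma(\alpha,\beta)$ variable is `gammavariate(alpha, 1/beta)`:

```python
import random
import statistics

random.seed(4)
a1, a2, beta = 1.5, 2.5, 2.0

# Sum of two independent rate-beta gammas; the theory predicts
# Gamma(a1 + a2, beta), i.e. mean (a1+a2)/beta = 2.0 and
# variance (a1+a2)/beta^2 = 1.0.
zs = [
    random.gammavariate(a1, 1 / beta) + random.gammavariate(a2, 1 / beta)
    for _ in range(200_000)
]

mean_z = statistics.fmean(zs)
var_z = statistics.variance(zs)
```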

2.5.2 General Case

  • Let $n\in\mathbb{N}^*$
  • Let $X_1\sim\Gamma(\alpha_1,\beta),\dots,X_n\sim\Gamma(\alpha_n,\beta)$ be $n$ independent gamma random variables sharing the same $\beta$ parameter

It can be proved by induction that:

$$\boxed{\sum_{i=1}^nX_i\sim\Gamma\left(\sum_{i=1}^n\alpha_i,\beta\right)}$$

2.6 Sum of Exponential distributions

  • Let $n\in\mathbb{N}^*,\ \lambda\in\mathbb{R}_+^*$
  • Let $X_1,\dots,X_n\sim\mathcal{E}(\lambda)$ be $n$ independent exponential random variables having the same parameter $\lambda$

Since $\mathcal{E}(\lambda)=\Gamma(1,\lambda)$ by $2.3$, this is a special case of $2.5.2$:

$$\boxed{\sum_{i=1}^nX_i\sim\Gamma\left(n,\lambda\right)}$$
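A Monte Carlo illustration: the sum of $n$ i.i.d. $\mathcal{E}(\lambda)$ variables should have the $\Gamma(n,\lambda)$ mean $n/\lambda$ and variance $n/\lambda^2$. Parameter values and seed are arbitrary:

```python
import random
import statistics

random.seed(5)
n, lam = 5, 2.0

# random.expovariate takes the rate lambda directly.
sums = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(100_000)]

mean_s = statistics.fmean(sums)    # expect n/lam = 2.5
var_s = statistics.variance(sums)  # expect n/lam^2 = 1.25
```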

2.7 Scaling of Gamma distributions

  • Let $k\in\mathbb{R}_+^*$
  • Let $X\sim \Gamma(\alpha,\beta)$ and $Y=kX$

We have:

$$\begin{align*} \forall x\in\mathbb{R}_+^*,\quad f_Y(x)&=\frac{1}{k}f_X\left(\frac{x}{k}\right)\\ &=\frac{1}{k}\cdot\frac{\beta^\alpha\left(\frac{x}{k}\right)^{\alpha-1}e^{-\frac{\beta}{k}x}}{\Gamma(\alpha)}\\ &=\frac{\left(\frac{\beta}{k}\right)^{\alpha}x^{\alpha-1}e^{-\frac{\beta}{k}x}}{\Gamma(\alpha)} \end{align*}$$

So we have $Y\sim \Gamma\left(\alpha,\frac{\beta}{k}\right)$:

$$\boxed{\forall k\in\mathbb{R}_+^*,\quad X\sim\Gamma(\alpha,\beta)\iff kX\sim \Gamma\left(\alpha,\frac{\beta}{k}\right)}$$
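A final Monte Carlo check of the scaling rule (arbitrary parameters; recall that `random.gammavariate` takes a *scale*, so rate $\beta$ means scale $1/\beta$):

```python
import random
import statistics

random.seed(6)
alpha, beta, k = 3.0, 2.0, 4.0

# Y = kX with X ~ Gamma(alpha, beta) (rate parameterization); the theory
# predicts Y ~ Gamma(alpha, beta/k), i.e. mean k*alpha/beta = 6.0 and
# variance alpha/(beta/k)^2 = k^2*alpha/beta^2 = 12.0.
ys = [k * random.gammavariate(alpha, 1 / beta) for _ in range(200_000)]

mean_y = statistics.fmean(ys)
var_y = statistics.variance(ys)
```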