From: Andrew Lorimer
Date: Sun, 11 Aug 2019 04:45:48 +0000 (+1000)
Subject: [methods] clean up statistics notes
X-Git-Tag: yr12~68
X-Git-Url: https://git.lorimer.id.au/notes.git/diff_plain/d65c435cc2937dad127d8f12242fc327e295fa62

[methods] clean up statistics notes
---

diff --git a/methods/statistics.pdf b/methods/statistics.pdf
index bccb7d2..0929ba0 100644
Binary files a/methods/statistics.pdf and b/methods/statistics.pdf differ

diff --git a/methods/statistics.tex b/methods/statistics.tex
index 4c398f0..56cd566 100644
--- a/methods/statistics.tex
+++ b/methods/statistics.tex
@@ -11,9 +11,10 @@
 	\usepackage{listings}
 	\usepackage{xcolor} % used only to show the phantomed stuff
 	\definecolor{cas}{HTML}{e6f0fe}
+	\usepackage{mathtools}
 
 	\pagestyle{fancy}
-	\fancyhead[LO,LE]{Unit 3 Methods Statistics}
+	\fancyhead[LO,LE]{Unit 3 Methods --- Statistics}
 	\fancyhead[CO,CE]{Andrew Lorimer}
 
 	\setlength\parindent{0pt}
@@ -23,34 +24,33 @@
 	\title{Statistics}
 	\author{}
 	\date{}
-	\maketitle
+	%\maketitle
 
 	\section{Probability}
-
-	\[ \Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B) \]
-	\[ \Pr(A \cup B) = 0 \tag{mutually exclusive} \]
-
-	\section{Conditional probability}
-
-	\[ \Pr(A|B) = \frac{\Pr(A \cap B)}{\Pr(B)} \quad \text{where } \Pr(B) \ne 0 \]
-	\[ \Pr(A) = \Pr(A|B) \cdot \Pr(B) + \Pr(A|B^{\prime}) \cdot \Pr(B^{\prime}) \tag{law of total probability} \]
-
-	\[ \Pr(A \cap B) = \Pr(A|B) \times \Pr(B) \tag{multiplication theorem} \]
+	\subsection*{Probability theorems}
 
-	For independent events:
+	\begin{align*}
+		\textbf{Union:} &&\Pr(A \cup B) &= \Pr(A) + \Pr(B) - \Pr(A \cap B) \\
+		\textbf{Multiplication theorem:} &&\Pr(A \cap B) &= \Pr(A|B) \times \Pr(B) \\
+		\textbf{Conditional:} &&\Pr(A|B) &= \frac{\Pr(A \cap B)}{\Pr(B)} \quad \text{where } \Pr(B) \ne 0 \\
+		\textbf{Law of total probability:} &&\Pr(A) &= \Pr(A|B) \cdot \Pr(B) + \Pr(A|B^{\prime}) \cdot \Pr(B^{\prime})
+	\end{align*}
+
+	Mutually exclusive \(\implies \Pr(A \cap B) = 0 \implies \Pr(A \cup B) = \Pr(A) + \Pr(B)\)
 
-	\begin{itemize}
-		\item \(\Pr(A \cap B) = \Pr(A) \times \Pr(B)\)
-		\item \(\Pr(A|B) = \Pr(A)\)
-		\item \(\Pr(B|A) = \Pr(B)\)
-	\end{itemize}
+	Independent events:
+	\begin{flalign*}
+		\quad \Pr(A \cap B) &= \Pr(A) \times \Pr(B)& \\
+		\Pr(A|B) &= \Pr(A) \\
+		\Pr(B|A) &= \Pr(B)
+	\end{flalign*}
 
-	\subsection{Discrete random distributions}
+	\subsection*{Discrete random distributions}
 
 	Any experiment or activity involving chance will have a probability associated with each result or \textit{outcome}. If the outcomes take \textbf{discrete numeric values} (outcomes that can be counted), and the result is unknown, then the activity is a \textit{discrete random probability distribution}.
 
-	\subsubsection{Discrete probability distributions}
+	\subsubsection*{Discrete probability distributions}
 
 	If an activity has outcomes whose probability values are all non-negative and at most one ($\implies 0 \le p(x) \le 1$), and for which the sum of all outcome probabilities is unity ($\implies \sum p(x) = 1$), then it is called a \textit{probability distribution} or \textit{probability mass} function.
 
@@ -58,13 +58,12 @@
 		\item \textbf{Probability distribution graph} - a series of points on a Cartesian axis representing results of outcomes. $\Pr(X=x)$ is on the $y$-axis, $x$ is on the $x$-axis.
 		\item \textbf{Mean $\mu$} or \textbf{expected value} \(E(X)\) - measure of central tendency. Also known as \textit{balance point}. Centre of a symmetrical distribution.
 		\begin{align*}
-			\overline{x} = \mu = E(X) &= \frac{\Sigma(xf)}{\Sigma(f)} \\
-			&= \sum_{i=1}^n (x_i \cdot P(X=x_i)) \\
-			&= \int_{-\infty}^{\infty} x\cdot f(x) \> dx \quad \text{(for pdf } f \text{)}
-			&= \sum_{-\infty}^{\infty}
+			\overline{x} = \mu = E(X) &= \frac{\Sigma \left[ x \cdot f(x) \right]}{\Sigma f} \tag{where \(f =\) absolute frequency} \\
+			&= \sum_{i=1}^n \left[ x_i \cdot \Pr(X=x_i) \right] \tag{for \(n\) values of \(x\)} \\
+			&= \int_{-\infty}^{\infty} (x\cdot f(x)) \> dx \tag{for pdf \(f\)}
 		\end{align*}
 		\item \textbf{Mode} - most popular value (has highest probability of \(X\) values). Multiple modes can exist if more than one value of \(X\) has the equal-highest probability. Number must exist in distribution.
-		\item \textbf{Median \(m\)} - the value of \(x\) such that \(\Pr(X \le m) = \Pr(X \ge m) = 0.5\). If \(m > 0.5\), then value of \(X\) that is reached is the median of \(X\). If \(m = 0.5 = 0.5\), then \(m\) is halfway between this value and the next.
+		\item \textbf{Median \(m\)} - the value of \(x\) such that \(\Pr(X \le m) = \Pr(X \ge m) = 0.5\). To find \(m\), add the probabilities of values of \(X\) from smallest to largest until the running sum reaches 0.5. If the sum first passes 0.5 at some value of \(X\), that value is the median; if the sum is exactly 0.5 at some value, then \(m\) is halfway between this value and the next.
 		\[ m = X \> \text{such that} \> \int_{-\infty}^{m} f(x) dx = 0.5 \]
 		\item \textbf{Variance $\sigma^2$} - measure of spread of data around the mean. Not the same magnitude as the original data. For distribution \(x_1 \mapsto p_1, x_2 \mapsto p_2, \dots, x_n \mapsto p_n\):
 		\begin{align*}
@@ -72,17 +71,23 @@
 			&= \sum (x-\mu)^2 \times \Pr(X=x) \\
 			&= \sum x^2 \times p(x) - \mu^2
 		\end{align*}
-		\item \textbf{Standard deviation $\sigma$} - measure of spread in the original magnitude of the data. Found by taking square root of the variance: $\sigma =\operatorname{sd}(X)=\sqrt{\operatorname{Var}(X)}$
+		\item \textbf{Standard deviation $\sigma$} - measure of spread in the original magnitude of the data. Found by taking the square root of the variance:
+		\begin{align*}
+			\sigma &= \operatorname{sd}(X) \\
+			&= \sqrt{\operatorname{Var}(X)}
+		\end{align*}
 	\end{itemize}
 
-	\subsubsection{Expectation theorems}
+	\subsubsection*{Expectation theorems}
+
+	For some non-linear function \(g\), the expected value \(E(g(X))\) is not, in general, equal to \(g(E(X))\).
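+	For example (an illustrative two-point distribution): if \(X\) takes the values \(0\) and \(2\) with probability \(\frac{1}{2}\) each, then \(E(X) = 1\), but \(E(X^2) = \frac{1}{2}(0)^2 + \frac{1}{2}(2)^2 = 2 \ne [E(X)]^2\).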
 	\begin{align*}
-		E(aX \pm b) &= aE(X) \pm b \\
-		E(z) &= z \\
-		E(X+Y) &= E(X) + E(Y) \\
-		E(X)^n &= \Sigma x^n \cdot p(x) \\
-		&\ne [E(X)]^2
+		E(X^n) &= \Sigma x^n \cdot p(x) \tag{non-linear function} \\
+		&\ne [E(X)]^n \\
+		E(aX \pm b) &= aE(X) \pm b \tag{linear function} \\
+		E(b) &= b \tag{for constant \(b \in \mathbb{R}\)} \\
+		E(X+Y) &= E(X) + E(Y) \tag{for two random variables}
 	\end{align*}
 
@@ -94,6 +99,7 @@
 		&= \sum_{k=0}^n {n \choose k} x^k y^{n-k}
 	\end{align*}
 
+	\subsubsection*{Patterns}
 	\begin{enumerate}
 		\item powers of \(x\) decrease \(n \rightarrow 0\)
 		\item powers of \(y\) increase \(0 \rightarrow n\)
@@ -101,14 +107,19 @@
 		\item Number of terms in \((x+a)^n\) expanded \& simplified is \(n+1\)
 	\end{enumerate}
 
-	Combinations: \(^n\text{C}_r = {N\choose k}\) (binomial coefficient)
+	\subsubsection*{Combinatorics}
+
+	\[ \text{Binomial coefficient:} \quad ^n\text{C}_r = {n\choose r} \]
+
 	\begin{itemize}
 		\item Arrangements \(^n\text{P}_r = \frac{n!}{(n-r)!}\)
 		\item Combinations \({n \choose r} = \frac{n!}{r!(n-r)!}\)
 		\item Note \({n \choose r} = {n \choose n-r}\)
 	\end{itemize}
 
-	\subsubsection{Pascal's Triangle}
+	\colorbox{cas}{On CAS:} (soft keyboard) \keystroke{\(\downarrow\)} \(\rightarrow\) \keystroke{Advanced} \(\rightarrow\) \verb;nCr(n,r);
+
+	\subsubsection*{Pascal's Triangle}
 
 	\begin{tabular}{>{$}l<{$\hspace{12pt}}*{13}{c}}
 		n=\cr
 		0&&&&&&&1&&&&&&\\
@@ -120,50 +131,47 @@
 		6&1&&6&&15&&20&&15&&6&&1
 	\end{tabular}
 
-	\colorbox{cas}{On CAS:} (soft keys) \keystroke{\(\downarrow\)} \(\rightarrow\) \keystroke{Advanced} \(\rightarrow\) \verb;nCr(n,cr);
-
 	\section{Binomial distributions} (based on sequences of Bernoulli trials)
 
 	\begin{align*}
-		\Pr(X=x) &= {n \choose x} p^x (1-p)^{n-x} \\
+		\text{Defined by} \quad X &\sim \operatorname{Bi}(n,p) \\
+		\implies \Pr(X=x) &= {n \choose x} p^x (1-p)^{n-x} \\
 		&= {n \choose x} p^x q^{n-x}
 	\end{align*}
 
+	where:
+	\begin{description}
+		\item \(n\) is the number of trials
+		\item There are two possible outcomes: \(S\) or \(F\)
+		\item \(\Pr(\text{success}) = p\)
+		\item \(\Pr(\text{failure}) = 1-p = q\)
+	\end{description}
+
+	\subsection*{Conditions for a binomial variable/distribution}
 	\begin{enumerate}
 		\item Two possible outcomes: \textbf{success} or \textbf{failure}
 		\item \(\Pr(\text{success})\) is constant across trials (also denoted \(p\))
 		\item Finite number \(n\) of independent trials
 	\end{enumerate}
 
-	If these conditions are met, then it is a Binomial Random Variable. This variable is said to have a \textit{binomial probability distribution}.
-
-	\begin{itemize}
-		\item \(n\) is the number of trials
-		\item There are two possible outcomes: \(S\) or \(F\)
-		\item \(\Pr(\text{success}) = p\)
-		\item \(\Pr(\text{failure}) = 1-p = q\)
-		\item Shorthand notation: \(X \sim \operatorname{Bi}(n,p)\)
-	\end{itemize}
-
-	\colorbox{cas}{On CAS:} Main \(\rightarrow\) Interactive \(\rightarrow\) Distribution \(\rightarrow\) \verb;binomialPDf; \\
-	Input \verb;x; (no. of successes), \verb;numtrial; (no. of trials), \verb;pos; (probbability of success)
-
-	\subsection{Applications of binomial distributions}
-
-	\[ \Pr(X \ge a) = 1 - \Pr(X < a) \]
-
-	\subsection{Expected value of a binomial distribution}
+	\subsection*{\colorbox{cas}{Solve on CAS}}
+
+	Main \(\rightarrow\) Interactive \(\rightarrow\) Distribution \(\rightarrow\) \verb;binomialPDf;
 
-	\[ E(X \sim \operatorname{Bi}(n,p))=np \]
+	\hspace{2em} Input \verb;x; (no. of successes), \verb;numtrial; (no. of trials), \verb;pos; (probability of success)
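+
+	\subsubsection*{Worked example}
+
+	e.g. for 10 trials with success probability 0.3 (values chosen for illustration), \(X \sim \operatorname{Bi}(10, 0.3)\):
+	\begin{align*}
+		\Pr(X=2) &= {10 \choose 2} (0.3)^2 (0.7)^8 \\
+		&= 45 \times 0.09 \times 0.05764801 \\
+		&\approx 0.2335
+	\end{align*}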
 
-	\subsection{Variance}
+	\subsection*{Properties of \(X \sim \operatorname{Bi}(n,p)\)}
 
-	\[ \sigma^2(X) = np(1-p) \]
+	\begin{align*}
+		\textbf{Mean} \hspace{-4cm} &&\mu(X) &= np \\
+		\textbf{Variance} \hspace{-4cm} &&\sigma^2(X) &= np(1-p) \\
+		\textbf{s.d.} \hspace{-4cm} &&\sigma(X) &= \sqrt{np(1-p)}
+	\end{align*}
 
-	\subsection{Standard deviation}
+	\subsection*{Applications of binomial distributions}
 
-	\[ \sigma(X) = \sqrt{np(1-p)} \]
+	\[ \Pr(X \ge a) = 1 - \Pr(X < a) \]
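+
+	e.g. for the illustrative \(X \sim \operatorname{Bi}(10, 0.3)\) above, \(\Pr(X \ge 1) = 1 - \Pr(X=0) = 1 - (0.7)^{10} \approx 0.9718\).
 
 \end{document}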