
Distance metrics and outliers

Association rules

Data cleaning

Summary statistics and visualisation for one variable

Summary statistics and visualisation for multiple variables

Testing population means with Z-tests and T-tests

Privacy

Pivotal quantities

Jackknifing

Bootstrapping

Non-parametric estimation of probability distributions

Bayesian parameter estimation

Point estimates of probability distributions

Likelihood functions

Maximum Likelihood Estimation (MLE)

Maximum A Posteriori (MAP) estimation

The Method Of Moments (MOM)

The Generalised Method of Moments (GMM)

M-estimators

Estimating population moments

Testing generative parameter estimates with Z-tests and T-tests

Choosing parametric probability distributions

Latent variable models

Dimensionality reduction with Principal Component Analysis (PCA)

K-means and k-medoids clustering

Bayesian parameter estimation of discriminative models

Point variable estimates for discriminative models

Using F-tests to compare regression models

Test sets and validation sets

Ordinary Least Squares for prediction

Regularising linear regression for prediction

Choosing linear models for prediction

Generalised linear models, the delta rule and binary classification

Generalised linear models and multiclass classification

Classification trees

Regression trees

Bayesian trees

Support Vector Machines (SVM)

Variational Bayes

The Naive Bayes classifier

The K-Nearest Neighbours (KNN) classifier

Discriminant analysis

Non-parametric regression

Ensemble methods

Ensemble methods for trees

Regularising black box models

Confidence intervals of black box models

Interpreting black box models

Semi-supervised learning

Imputing missing data for prediction

Recommenders

Multi-layer perceptrons and backpropagation

Additional link functions for neural networks

Alternatives to backpropagation

Pre-training neural networks

Multi-class neural networks

Probabilistic neural networks

Neural networks and regression

Regularising neural networks

Convolutional layers for neural networks

Autoencoders and Variational Autoencoders (VAE)

Restricted Boltzmann Machines (RBMs)

Self-organising maps

Generative neural networks

Classifying written characters

Classifying images

Facial recognition

Computer vision

Ordinary Least Squares for inference

Testing regression parameter estimates with Z-tests and T-tests

Multiple hypothesis testing

Generalised Least Squares

General Linear Models

Analysis of variance (ANOVA)

Instrumental Variables

Imputing missing data for inference

Measurement error and inference

Semi-parametric regression

Homogeneous treatment effects

Heterogeneous treatment effects

Causal trees

Estimating Markov chains

Estimating Hidden Markov Models (HMMs)

Univariate forecasting

Multivariate forecasting

Inference with time series

Natural Language Processing (NLP)

Recurrent neural networks

Recurrent Neural Network (RNN) encoders and decoders

Survival analysis

Text prediction

Text translation

Voice recognition

Song recognition

Audio-to-text / text-to-audio

Weather forecasting

Hashes

Merkle trees

Classical encryption

Modern symmetric encryption

Modern asymmetric encryption

Signal processing

Winner-take-all layer: the output of every node in a layer is \(0\) unless that node has the greatest activation.

Max node: a single node that outputs the maximum of all its inputs.
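The two fragments above can be sketched in code. This is a minimal illustration, assuming they describe a winner-take-all layer (only the most active node passes its value through) and a max-pooling node; the function names `winner_take_all` and `max_node` are illustrative, not from the source.

```python
import numpy as np

def winner_take_all(activations):
    """Winner-take-all layer: every output is 0 except the node
    with the greatest activation, which keeps its value."""
    out = np.zeros_like(activations, dtype=float)
    i = np.argmax(activations)
    out[i] = activations[i]
    return out

def max_node(inputs):
    """Max node: a single node that outputs the maximum of its inputs."""
    return np.max(inputs)

a = np.array([0.2, 1.5, 0.7])
print(winner_take_all(a))  # only the largest activation survives
print(max_node(a))         # the maximum input value
```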