This article starts down the road towards building a theoretical intuition for Gaussian Processes, another Machine Learning technique based on Bayes’ Rule. There is, however, a raft of material that I needed to understand (and in places relearn) before I could fully appreciate some of the underpinnings of this technique.
I’d like to take high-level dives into some of these topics, which I believe will help practitioners go a little deeper than “it’s just a Gaussian of many variables”.
The map below shows the rough order in which the preliminary material will be presented.
```mermaid
graph TD
    quad[Quadratic Form of Matrix] --> chol[Cholesky Factorisation]
    tri[Triangular Matrices] --> chol
    det[Determinants] --> chol
    jac[Jacobian] --> jaclin[Jacobian of Linear Transformations]
    cov[Covariance Matrix] --> mvn[Multivariate Gaussian]
    chol --> mvn
    mvn --> mvnlin[MVN as Linearly Transformed Sums of Uncorrelated Random Variables]
    crv[Change of Random Variable] --> mvnlin
    jaclin --> mvnlin
    diffeq[Difference Equations] --> diffmat[Difference Matrix] --> gp[Gaussian Processes]
    mvnlin --> Conditioning
    mvnlin --> Marginalisation
    Conditioning --> gp
    Marginalisation --> gp

    style chol fill:#006f00,stroke:#000,stroke-width:2px,color:#fff
    style mvn fill:#006fff,stroke:#000,stroke-width:2px,color:#fff
    style gp fill:#8f0f00,stroke:#000,stroke-width:2px,color:#fff
```
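As a taste of why these edges matter, here is a minimal NumPy sketch (my own illustration, not code from this series) of the chol → mvn and mvn → mvnlin arrows above: sampling a multivariate Gaussian by Cholesky-factorising its covariance matrix and linearly transforming uncorrelated standard normals. The particular mean and covariance values are arbitrary, chosen just for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])      # mean vector (arbitrary example values)
sigma = np.array([[2.0, 0.6],   # covariance matrix, symmetric and
                  [0.6, 1.0]])  # positive definite

L = np.linalg.cholesky(sigma)   # lower-triangular L with sigma = L @ L.T

# Transform uncorrelated standard normals z ~ N(0, I) into x ~ N(mu, sigma):
# x = mu + L z, since E[(L z)(L z)^T] = L I L^T = sigma.
z = rng.standard_normal((2, 10_000))
x = mu[:, None] + L @ z

print(np.cov(x))                # sample covariance; should be close to sigma
```

Running this prints a sample covariance close to `sigma`, which is exactly the “MVN as a linear transform of uncorrelated random variables” view the map builds up to.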
Let’s quickly survey these topics and their relevance in this article.