Total Internal Reflection

Technology and Art

Latest Posts

An Ode to the Generalist

13 March 2023

This post is probably a spiritual successor to Resilient Knowledge Bases.

Tags: Software Engineering

Economic Factors in Software Architectural Decisions

20 February 2023

This article continues from where Every Software Engineer is an Accountant left off. I have had feedback that I need to make my posts on this topic a little easier to follow; I will attempt to do that here.

Tags: Software Engineering, Software Engineering Economics

Advice I'd give a younger me

13 February 2023

This is a weird mix of advice I’d give the less-experienced me, as well as reflections of my personal value system. This verbal diarrhoea came out all at once in a single sitting of 45 minutes. I apologise for some of the strong language in here, but I thought I’d share it without much censoring.

Tags: Software Engineering, Value System

Every Software Engineer is an Accountant

4 February 2023

This article continues from where Every Software Engineer is an Economist left off, and delves slightly deeper into some of the topics already introduced there, as well as several new ones. In the spirit of continuing the theme of “Every Software Engineer is an X”, we’ve chosen accounting as the next profession.

Tags: Software Engineering, Software Engineering Economics

Every Software Engineer is an Economist

22 January 2023

Background: This post took me a while to write: much of this is motivated by problems that I’ve noticed teams facing day-to-day at work. To be clear, this post does not offer a solution; only some thoughts, and maybe a path forward in aligning developers’ and architects’ thinking more closely with the frameworks used by people controlling the purse-strings of software development projects.

Tags: Software Engineering, Software Engineering Economics

Transformers using PyTorch : Worklog Part 2

14 January 2023

We continue looking at the Transformer architecture from where we left off in Part 1. When we’d stopped, we’d set up the Encoder stack, but had stopped short of adding positional encoding and starting work on the Decoder stack. In this post, we will focus on setting up the training cycle.
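
As a taste of the positional-encoding step mentioned above, here is a minimal sketch of the standard sinusoidal encoding from the original Transformer paper; the function name and shape conventions are mine, not the post's:

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Standard sinusoidal positional encoding (d_model assumed even)."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)    # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                         * (-math.log(10000.0) / d_model))                # (d_model/2,)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions get sine
    pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions get cosine
    return pe                                     # added to the token embeddings
```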

Tags: Machine Learning, PyTorch, Programming, Deep Learning, Transformers

A Tale of Unintentional Learning

11 January 2023

TL;DR: I went from using Vim once a year to using it every day, by accident, after getting into a flow mindset through the effort of understanding a Machine Learning paper. It feels like a miracle.

Tags: Learning, Vim

Tests increase our Knowledge of the System: A Proof from Probability

10 January 2023

Note: This is a post from July 13, 2011, rescued from my old blog. This is only for archival purposes, and is reproduced verbatim, but I make no claims about its rigour, though it does still seem plausible.

Tags: Proof, Tests, Software Engineering, Probability

A Pipeline for Adaptive Bitrate Video Encoding

9 January 2023

Note: This is a post from July 13, 2011, rescued from my old blog. This is only for archival purposes, and is reproduced verbatim, but is hopelessly outdated.

Tags: Video Processing, Archive, Software Engineering

Vim and TMux Commands Galore

5 January 2023

This short post lists the Neovim (Vim) shortcuts I am getting used to. I’ve recently switched to trying the Vim mode for my IDE needs, and having used Vim previously only for very simple tasks, am having a blast practising the basic Vim shortcuts. Ultimately, I will probably move to doing more IDE-related work in native Vim too.

Tags: Vim, Text Editing

Plenoxels and Neural Radiance Fields using PyTorch: Part 6

27 December 2022

This is part of a series of posts breaking down the paper Plenoxels: Radiance Fields without Neural Networks, and providing (hopefully) well-annotated source code to aid in understanding.

Tags: Machine Learning, PyTorch, Programming, Neural Radiance Fields, Machine Vision

Plenoxels and Neural Radiance Fields using PyTorch: Part 5

19 December 2022

This is part of a series of posts breaking down the paper Plenoxels: Radiance Fields without Neural Networks, and providing (hopefully) well-annotated source code to aid in understanding.

Tags: Machine Learning, PyTorch, Programming, Neural Radiance Fields, Machine Vision

Plenoxels and Neural Radiance Fields using PyTorch: Part 4

18 December 2022

This is part of a series of posts breaking down the paper Plenoxels: Radiance Fields without Neural Networks, and providing (hopefully) well-annotated source code to aid in understanding.

Tags: Machine Learning, PyTorch, Programming, Neural Radiance Fields, Machine Vision

Plenoxels and Neural Radiance Fields using PyTorch: Part 3

7 December 2022

This is part of a series of posts breaking down the paper Plenoxels: Radiance Fields without Neural Networks, and providing (hopefully) well-annotated source code to aid in understanding.

Tags: Machine Learning, PyTorch, Programming, Neural Radiance Fields, Machine Vision

Plenoxels and Neural Radiance Fields using PyTorch: Part 2

5 December 2022

This is part of a series of posts breaking down the paper Plenoxels: Radiance Fields without Neural Networks, and providing (hopefully) well-annotated source code to aid in understanding.

Tags: Machine Learning, PyTorch, Programming, Neural Radiance Fields, Machine Vision

Plenoxels and Neural Radiance Fields using PyTorch: Part 1

4 December 2022

This is part of a series of posts breaking down the paper Plenoxels: Radiance Fields without Neural Networks, and providing (hopefully) well-annotated source code to aid in understanding.

Tags: Machine Learning, PyTorch, Programming, Neural Radiance Fields, Machine Vision

Transformers using PyTorch : Worklog Part 1

29 November 2022

It may seem strange that I’m jumping from implementing a simple neural network into Transformers. I will return to building up the foundations of neural networks soon enough: for the moment, let’s build a Transformer using PyTorch.

Tags: Machine Learning, PyTorch, Programming, Deep Learning, Transformers

The No-Questions Asked Guide to PyTorch : Part 1

27 November 2022

Programming guides are probably the first posts to become obsolete, as APIs are updated. Regardless, we will look at building simple neural networks in PyTorch. We won’t be starting from models with a million parameters, however. We will proceed from the basics, starting with a single neuron, talk a little about the tensor notation and how that relates to our usual mathematical notation of representing everything with column vectors, and scale up from there.
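
As a flavour of the starting point, a single neuron in PyTorch is just an affine map; here is a minimal illustrative sketch (not the post's actual code):

```python
import torch
import torch.nn as nn

# One neuron: three inputs, one output, i.e. y = x W^T + b.
neuron = nn.Linear(in_features=3, out_features=1)

x = torch.tensor([[1.0, 2.0, 3.0]])  # a single sample as a row vector
y = neuron(x)                        # shape (1, 1)
```

Note that PyTorch treats samples as row vectors, whereas mathematical convention usually works with column vectors; the post's remark on tensor notation addresses exactly this mismatch.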

Tags: Machine Learning, PyTorch, Programming, Neural Networks

A Quick Note on Proving the Triangle Inequality on a Derived Distance Metric using Monotonicity

20 October 2022

This is a quick note on proving the Triangle Inequality criterion of the following claim:
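
(The claim itself is stated in the full post; for reference, the triangle inequality criterion for a metric \(d\) requires \(d(x, z) \le d(x, y) + d(y, z)\) for all \(x, y, z\).)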

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Functional Analysis Exercises 12 : Linear Operators

13 December 2021

This post lists solutions to the exercises in the Linear Operators section 2.6 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Functional Analysis Exercises 11 : Compactness and Finite Dimension

2 December 2021

This post lists solutions to the exercises in the Compactness and Finite Dimension section 2.5 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Functional Analysis Exercises 10 : Finite Dimensional Normed Spaces and Subspaces

22 November 2021

This post lists solutions to the exercises in the Finite Dimensional Normed Spaces and Subspaces section 2.4 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Functional Analysis Exercises 9 : Further Properties of Normed Spaces

19 November 2021

This post lists solutions to the exercises in the Further Properties of Normed Spaces section 2.3 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Two Phase Commit: Indistinguishable Commit Scenario

18 November 2021

We review the most interesting failure scenario for the Two Phase Commit (2PC) protocol. There are excellent explanations of 2PC out there, and I won’t bother too much with the basic explanation. The focus of this post is a walkthrough of the indistinguishable state scenario, where neither a global commit, nor a global abort command can be issued.
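
As a deliberately simplified sketch of why the blocking arises (the names here are assumptions, not from the post): the coordinator's decision rule is trivial, but a participant that voted YES cannot reconstruct the decision alone.

```python
def coordinator_decision(votes):
    # Phase 1: collect votes; any NO (or a timeout) forces a global abort.
    if all(v == "YES" for v in votes):
        return "GLOBAL_COMMIT"
    return "GLOBAL_ABORT"

# If the coordinator crashes after phase 1 but before broadcasting its
# decision, a participant that voted YES is blocked: from its local state
# alone, both GLOBAL_COMMIT and GLOBAL_ABORT remain possible.
```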

Tags: Distributed Systems, Software Engineering

Functional Analysis Exercises 8 : Normed and Banach Spaces

11 November 2021

This post lists solutions to the exercises in the Normed Space, Banach Space section 2.2 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Resilient Knowledge Bases : Fundamentals, not Buzzwords

6 November 2021

We start this tirade with a memorable quote from Alice Through the Looking-Glass:

Tags: Technology, Resilient Knowledge Base

Functional Analysis Exercises 7 : Vector Spaces

3 November 2021

This post lists solutions to the exercises in the Vector Space section 2.1 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Functional Analysis Exercises 6 : Completion of Metric Spaces

25 October 2021

This post lists solutions to the exercises in the Completion of Metric Spaces section 1.6 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Functional Analysis Exercises 5 : Completeness Proofs

16 October 2021

This post lists solutions to the exercises in the Completeness Proofs section 1.5 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Functional Analysis Exercises 4 : Convergence, Cauchy Sequences, and Completeness

12 October 2021

This post lists solutions to the exercises in the Convergence, Cauchy Sequences, and Completeness section 1.4 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Assorted Analysis Proofs

11 October 2021

This post lists assorted proofs from Analysis, without any particular theme.

Tags: Mathematics, Proof, Analysis, Pure Mathematics

Functional Analysis Exercises 3 : Sets, Continuous Mappings, and Separability

7 October 2021

This post lists solutions to many of the exercises in the Open Set, Closed Set, Neighbourhood section 1.3 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

General Proof Tactics for Real and Functional Analysis

29 September 2021

This article represents a (very short) collection of my ongoing notes on proof tactics I’ve found useful when I’ve been stuck trying to solve proof exercises. I aim to continue documenting these in as much detail as possible. These are mostly aids while building intuition about how to prove something, and gradually should become part of one’s mental lexicon.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics

Functional Analysis Exercises 2 : Distance Metrics

28 September 2021

This post lists solutions to many of the exercises in the Distance Metrics section 1.2 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is a work in progress, and proofs may be refined over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Important Inequalities in Functional Analysis

27 September 2021

Continuing my self-study of Functional Analysis, this post describes proofs for the following important inequalities in the subject (their standard statements are reproduced after the list, for reference):

  • Young’s Inequality
  • Hölder’s Inequality
  • Minkowski’s Inequality
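
With conjugate exponents \(p, q > 1\) satisfying \(\frac{1}{p} + \frac{1}{q} = 1\):

\[ ab \le \frac{a^p}{p} + \frac{b^q}{q} \quad (a, b \ge 0) \qquad \text{(Young)} \]

\[ \sum_i |x_i y_i| \le \left( \sum_i |x_i|^p \right)^{1/p} \left( \sum_i |y_i|^q \right)^{1/q} \qquad \text{(Hölder)} \]

\[ \left( \sum_i |x_i + y_i|^p \right)^{1/p} \le \left( \sum_i |x_i|^p \right)^{1/p} + \left( \sum_i |y_i|^p \right)^{1/p} \quad (p \ge 1) \qquad \text{(Minkowski)} \]
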
Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics

Functional Analysis Exercises 1 : Distance Metrics

21 September 2021

This post lists solutions to many of the exercises in the Distance Metrics section 1.1 of Erwin Kreyszig’s Introductory Functional Analysis with Applications. This is definitely a work in progress, and proofs may be refined or added over time.

Tags: Mathematics, Proof, Functional Analysis, Pure Mathematics, Kreyszig

Gaussian Processes: Theory

10 September 2021

In this article, we will build up our mathematical understanding of Gaussian Processes. We will understand the conditioning operation a bit more, since that is the backbone of inferring the posterior distribution. We will also look at how the covariance matrix evolves as training points are added.
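
The conditioning operation referred to here is the standard Gaussian conditioning identity: for jointly Gaussian \(x_1, x_2\) with the usual block partition of mean and covariance,

\[ x_1 \mid x_2 \sim \mathcal{N}\left( \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x_2 - \mu_2),\; \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \right) \]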

Tags: Theory, Gaussian Processes, Probability, Machine Learning

Gaussian Processes: Intuition

6 September 2021

In this article, we will build up our intuition of Gaussian Processes, and try to understand how they model uncertainty about data they have not encountered yet, while still being useful for regression. We will also see why the Covariance Matrix (and consequently, the Kernel) is a fundamental building block of our assumptions around the data we are trying to model.

Tags: Theory, Gaussian Processes, Probability, Machine Learning

Geometry of the Multivariate Gaussian Distribution

30 August 2021

Continuing from the roadmap set out in Road to Gaussian Processes, we begin with the geometry of the central object which underlies this Machine Learning Technique, the Multivariate Gaussian Distribution. We will study its form to build up some geometric intuition around its interpretation.
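
For reference, the density whose geometry the post studies is

\[ p(x) = \frac{1}{(2\pi)^{k/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right) \]

where the quadratic form in the exponent is what gives the distribution its ellipsoidal level sets.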

Tags: Theory, Multivariate Gaussian Distribution, Probability

Statistics from Geometry and Linear Algebra

12 August 2021

This article covers some common statistical quantities/metrics which can be derived from Linear Algebra and corresponding intuitions from Geometry, without recourse to Probability or Calculus. Of course, those subjects add more rigour and insight into these concepts, but our aim is to provide a form of intuitive shorthand for the reader.
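
One such identity, as an example of the flavour of the post: for mean-centred data vectors \(x\) and \(y\), the sample correlation is the cosine of the angle between them,

\[ r = \frac{x \cdot y}{\|x\| \, \|y\|} = \cos\theta \]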

Tags: Theory, Statistics, Linear Algebra

Non-Linear Support Vector Machines: Radial Basis Function Kernel and the Kernel Trick

7 August 2021

This article builds upon the previous material on kernels and Support Vector Machines to introduce some simple examples of Reproducing Kernels, including a simplified version of the frequently-used Radial Basis Function kernel. Beyond that, we finally look at the actual application of kernels and the so-called Kernel Trick to avoid expensive computation of projections of data points into higher-dimensional space, when working with Support Vector Machines.
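
For reference, the Radial Basis Function kernel mentioned here is usually written as

\[ k(x, y) = \exp\left( -\frac{\|x - y\|^2}{2\sigma^2} \right) \]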

Tags: Machine Learning, Kernels, Theory, Functional Analysis, Support Vector Machines

Kernel Functions with Reproducing Kernel Hilbert Spaces

20 July 2021

This article uses the previous mathematical groundwork to discuss the construction of Reproducing Kernel Hilbert Spaces. We’ll make several assumptions that have been proved and discussed in those articles. There are multiple ways of discussing Kernel Functions, like the Moore–Aronszajn Theorem and Mercer’s Theorem. We may discuss some of those approaches in the future, but here we will focus on the constructive approach to characterising Kernel Functions.

Tags: Machine Learning, Kernels, Theory, Functional Analysis, Linear Algebra

Functional Analysis: Norms, Operators, and Some Theorems

19 July 2021

This article expands the groundwork laid in Kernel Functions: Functional Analysis and Linear Algebra Preliminaries to discuss some more properties and proofs for some of the properties of functions that we will use in future discussions on Kernel Methods in Machine Learning, including (but not restricted to) the construction of Reproducing Kernel Hilbert Spaces.

Tags: Mathematics, Theory, Operator Theory, Functional Analysis, Pure Mathematics

Functional and Real Analysis Notes

18 July 2021

These are personal study notes, brief or expanded, complete or incomplete. Some concepts here will be alluded to in full-fledged Machine Learning posts.

Tags: Mathematics, Theory, Notes, Functional Analysis, Pure Mathematics

Kernel Functions: Functional Analysis and Linear Algebra Preliminaries

17 July 2021

This article lays the groundwork for an important construction called Reproducing Kernel Hilbert Spaces, which allows a certain class of functions (called Kernel Functions) to be a valid representation of an inner product in (potentially) higher-dimensional space. This construction will allow us to perform the necessary higher-dimensional computations without explicitly projecting every point in our data set into higher dimensions, in the case of Non-Linear Support Vector Machines, which will be discussed in the upcoming article.
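
The defining property being built towards is that a kernel function computes an inner product of (possibly implicit) projections \(\varphi\) into that higher-dimensional space:

\[ k(x, y) = \langle \varphi(x), \varphi(y) \rangle \]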

Tags: Machine Learning, Kernels, Theory, Functional Analysis, Linear Algebra

Real Analysis: Patterns for Proving Irrationality of Square Roots

9 July 2021

Continuing on my journey through Real Analysis, we will focus here on common proof patterns which apply to irrational square roots. These patterns apply to the following sort of proof exercises:

Tags: Real Analysis, Mathematics, Proof, Pure Mathematics

The Cholesky and LDL* Factorisations

8 July 2021

This article discusses a set of two useful (and closely related) factorisations for positive-definite matrices: the Cholesky and the \(LDL^T\) factorisations. Both of them find various uses: the Cholesky factorisation particularly is used when solving large systems of linear equations.
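
For reference, for a symmetric positive-definite matrix \(A\), the two factorisations are

\[ A = LL^T \qquad \text{and} \qquad A = LDL^T \]

where \(L\) is lower triangular (unit lower triangular in the \(LDL^T\) case) and \(D\) is diagonal.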

Tags: Machine Learning, Theory, Linear Algebra

The Gram-Schmidt Orthogonalisation

27 May 2021

We discuss an important factorisation of a matrix, which allows us to convert a linearly independent but non-orthogonal basis to an orthonormal basis. This uses a procedure which iteratively extracts vectors orthogonal to the previously-extracted vectors (normalising each in turn), to ultimately define the orthonormal basis. This is called the Gram-Schmidt Orthogonalisation, and we will also show a proof for this.
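
A minimal NumPy sketch of the procedure (classical Gram-Schmidt; illustrative rather than numerically robust, and not the post's code):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a sequence of linearly independent vectors."""
    basis = []
    for v in vectors:
        # Remove the components along the already-extracted orthonormal vectors...
        w = v - sum(np.dot(v, q) * q for q in basis)
        # ...and normalise what remains.
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)
```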

Tags: Machine Learning, Linear Algebra, Proofs, Theory

Real Analysis Proofs #1

18 May 2021

Since I’m currently self-studying Real Analysis, I’ll be listing down proofs I either initially had trouble understanding, or enjoyed proving, here. These are very mathematical posts, and are for personal documentation, mostly.

Tags: Real Analysis, Mathematics, Proof, Pure Mathematics

Support Vector Machines from First Principles: Linear SVMs

10 May 2021

We have looked at Lagrange Multipliers and how they help build constraints into the function that we wish to optimise. Their relevance to Support Vector Machines lies in how the constraints on the classifier margin (i.e., the supporting hyperplanes) are incorporated into the search for the optimal hyperplane.
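
For reference, the resulting optimisation problem for the hard-margin linear SVM is

\[ \min_{w, b} \; \frac{1}{2} \|w\|^2 \quad \text{subject to} \quad y_i (w^T x_i + b) \ge 1 \;\; \forall i \]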

Tags: Machine Learning, Support Vector Machines, Optimisation, Theory

Quadratic Optimisation: Lagrangian Dual, and the Karush-Kuhn-Tucker Conditions

10 May 2021

This article concludes the (very abbreviated) theoretical background required to understand Quadratic Optimisation. Here, we extend the Lagrangian Multipliers approach, which in its current form, admits only equality constraints. We will extend it to allow constraints which can be expressed as inequalities.
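
For reference, for \(\min f(x)\) subject to \(g_i(x) \le 0\), the Karush-Kuhn-Tucker conditions at an optimum \(x^*\) are

\[ \nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) = 0, \qquad g_i(x^*) \le 0, \qquad \mu_i \ge 0, \qquad \mu_i g_i(x^*) = 0 \]

(stationarity, primal feasibility, dual feasibility, and complementary slackness, respectively).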

Tags: Machine Learning, Quadratic Optimisation, Linear Algebra, Optimisation, Theory

Quadratic Optimisation: Mathematical Background

8 May 2021

This article continues the original discussion on Quadratic Optimisation, where we considered Principal Components Analysis as a motivation. Originally, this article was going to begin delving into the Lagrangian Dual and the Karush-Kuhn-Tucker Theorem, but the requisite mathematical machinery to understand some of the concepts necessitated breaking the preliminary setup into its own separate article (which you’re now reading).

Tags: Quadratic Optimisation, Optimisation, Theory

Common Ways of Looking at Matrix Multiplications

29 April 2021

We consider the more frequently utilised viewpoints of matrix multiplication, and relate each to one or more applications where that viewpoint is more useful. These are the viewpoints we will consider.

Tags: Machine Learning, Linear Algebra, Theory

Intuitions about the Implicit Function Theorem

29 April 2021

We discussed the Implicit Function Theorem at the end of the article on Lagrange Multipliers, with some hand-waving to justify the linear behaviour on manifolds in arbitrary \(\mathbb{R}^N\).
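
In its simplest form: if \(F(x, y) = 0\) and \(\partial F / \partial y\) is invertible at a point, then locally \(y = g(x)\) for a unique differentiable \(g\), with

\[ Dg(x) = -\left( \frac{\partial F}{\partial y} \right)^{-1} \frac{\partial F}{\partial x} \]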

Tags: Vector Calculus, Linear Algebra, Theory, Mathematics

Quadratic Optimisation using Principal Component Analysis as Motivation: Part Two

28 April 2021

We pick up from where we left off in Quadratic Optimisation using Principal Component Analysis as Motivation: Part One. We treated Principal Component Analysis as an optimisation, and took a detour to build our geometric intuition behind Lagrange Multipliers, wading through its proof to some level.

Tags: Machine Learning, Quadratic Optimisation, Linear Algebra, Principal Components Analysis, Optimisation, Theory

Vector Calculus: Lagrange Multipliers, Manifolds, and the Implicit Function Theorem

24 April 2021

In this article, we finally put all our understanding of Vector Calculus to use by showing why and how Lagrange Multipliers work. We will be focusing on several important ideas, but the most important one is around the linearisation of spaces at a local level, which might not be smooth globally. The Implicit Function Theorem will provide a strong statement around the conditions necessary to satisfy this.
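
The central result being justified is the first-order condition: at a constrained extremum of \(f\) on the manifold \(g(x) = c\), the gradients align,

\[ \nabla f(x^*) = \lambda \nabla g(x^*) \]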

Tags: Machine Learning, Optimisation, Vector Calculus, Lagrange Multipliers, Theory

Vector Calculus: Graphs, Level Sets, and Constraint Manifolds

20 April 2021

In this article, we take a detour to understand the mathematical intuition behind Constrained Optimisation, and more specifically the method of Lagrangian multipliers. We have been discussing Linear Algebra, specifically matrices, for quite a bit now. Optimisation theory, and Quadratic Optimisation as well, relies heavily on Vector Calculus for many of its results and proofs.

Tags: Machine Learning, Vector Calculus, Theory

Quadratic Optimisation using Principal Component Analysis as Motivation: Part One

19 April 2021

This series of articles presents the intuition behind the Quadratic Form of a Matrix, as well as its optimisation counterpart, Quadratic Optimisation, motivated by the example of Principal Components Analysis. PCA is presented here, not in its own right, but as an application of these two concepts. PCA proper will be presented in another article where we will discuss eigendecomposition, eigenvalues, and eigenvectors.
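
For reference, the Quadratic Form of a matrix \(A\) is \(Q(x) = x^T A x\); the PCA connection is that the first principal direction maximises the variance \(x^T \Sigma x\) (for covariance matrix \(\Sigma\)) subject to \(\|x\| = 1\):

\[ \max_{\|x\| = 1} x^T \Sigma x \]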

Tags: Machine Learning, Quadratic Optimisation, Linear Algebra, Principal Components Analysis, Optimisation, Theory

Road to Gaussian Processes

17 April 2021

This article aims to start the road towards a theoretical intuition behind Gaussian Processes, another Machine Learning technique based on Bayes’ Rule. However, there is a raft of material that I needed to understand and relearn before fully appreciating some of the underpinnings of this technique.

Tags: Machine Learning, Gaussian Processes, Theory

Support Vector Machines from First Principles: Part One

14 April 2021

We will derive the intuition behind Support Vector Machines from first principles. This will involve deriving some basic vector algebra proofs, including exploring some intuitions behind hyperplanes. Then we’ll continue by adding the concepts behind quadratic optimisation to our understanding.

Tags: Machine Learning, Support Vector Machines, Theory

Dot Product: Algebraic and Geometric Equivalence

11 April 2021

The dot product of two vectors is geometrically simple: the product of the magnitudes of these vectors multiplied by the cosine of the angle between them. What is not immediately obvious is the algebraic interpretation of the dot product.
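
The equivalence the post establishes is

\[ \mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\| \, \|\mathbf{b}\| \cos\theta = \sum_i a_i b_i \]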

Tags: Machine Learning, Linear Algebra, Dot Product, Theory

Linear Regression: Assumptions and Results using the Maximum Likelihood Estimator

5 April 2021

Let’s look at Linear Regression. The “linear” term refers to the fact that the output variable is a linear combination of the input variables.
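
Concretely, the model is \(y = w^T x + \epsilon\) with Gaussian noise \(\epsilon \sim \mathcal{N}(0, \sigma^2)\); under this assumption, maximising the likelihood is equivalent to minimising the squared error:

\[ \hat{w}_{\text{MLE}} = \arg\min_w \sum_i (y_i - w^T x_i)^2 \]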

Tags: Machine Learning, Linear Regression, Maximum Likelihood Estimator, Theory, Probability

Matrix Rank and Some Results

4 April 2021

I’d like to introduce some basic results about the rank of a matrix. Simply put, the rank of a matrix is the number of linearly independent vectors in it. Note that I didn’t say whether these are column vectors or row vectors; that’s because of the following section which will narrow down the specific cases (we will also prove that these numbers are equal for any matrix).
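
A quick illustrative example (NumPy, not from the post):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # the second row is twice the first

print(np.linalg.matrix_rank(A))  # 1: one independent row, and equally one independent column
```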

Tags: Machine Learning, Linear Algebra, Theory

Assorted Intuitions about Matrices

3 April 2021

Some of these points about matrices are worth noting down, as aids to intuition. I might expand on some of these points into their own posts.

Tags: Machine Learning, Linear Algebra, Theory

Matrix Outer Product: Columns-into-Rows and the LU Factorisation

2 April 2021

We will discuss the Column-into-Rows computation technique for matrix outer products. This will lead us to one of the important factorisations (the LU Decomposition) that is used computationally when solving systems of equations, or computing matrix inverses.
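
The columns-into-rows viewpoint expresses the product as a sum of outer products: writing \(l_k\) for the \(k\)-th column of \(L\) and \(u_k^T\) for the \(k\)-th row of \(U\),

\[ A = LU = \sum_k l_k u_k^T \]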

Tags: Machine Learning, Linear Algebra, Theory

Intuitions about the Orthogonality of Matrix Subspaces

2 April 2021

This is the easiest way I’ve found to explain the orthogonality of matrix subspaces to myself. The argument will essentially be based on the geometry of planes, which extends naturally to hyperplanes.
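
The core fact is that the row space is orthogonal to the null space: \(Ax = 0\) says every row of \(A\) is perpendicular to \(x\), so

\[ C(A^T) \perp N(A) \]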

Tags: Machine Learning, Linear Algebra, Theory

Matrix Outer Product: Value-wise computation and the Transposition Rule

1 April 2021

We will discuss the value-wise computation technique for matrix outer products. This will lead us to a simple sketch of the proof of reversal of order for transposed outer products.
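
The rule in question:

\[ (AB)^T = B^T A^T \]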

Tags: Machine Learning, Linear Algebra, Theory

Matrix Outer Product: Linear Combinations of Vectors

30 March 2021

Matrix multiplication (outer product) is a fundamental operation in almost any Machine Learning proof, statement, or computation. Much insight may be gleaned from different ways of looking at matrix multiplication. In this post, we will look at one (and possibly the most important) interpretation: namely, the linear combination of vectors.
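
In symbols: with \(a_1, \dots, a_n\) the columns of \(A\),

\[ Ax = x_1 a_1 + x_2 a_2 + \dots + x_n a_n \]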

Tags: Machine Learning, Linear Algebra, Theory

Vectors, Normals, and Hyperplanes

29 March 2021

Linear Algebra deals with matrices. But that is missing the point, because the more fundamental component of a matrix is what will allow us to build our intuition on this subject. This component is the vector, and in this post, I will introduce vectors, along with common notations of expression.
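
For reference, a hyperplane with normal vector \(w\) is the set

\[ \{ x : w^T x = b \} \]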

Tags: Machine Learning, Linear Algebra, Theory

Machine Learning Theory Track

28 March 2021

I’ve always been fascinated by Machine Learning. This began in the seventh standard when I discovered a second-hand book on Neural Networks for my ZX Spectrum.

Tags: Machine Learning, Theory

First Post

28 March 2021

My previous Wordpress site went kaput. With it went most of the content that I’ve posted over the years. I have switched over to Jekyll, and will be adding new material, as well as reintroducing old material as needed, from a backup that I have. Stay posted!

Tags: General


All content on this site was written by a human. No language models were used to generate, summarise, or rephrase any of the content here.