High-Dimensional Sampling

Most of the content here I prepared for a talk on Bayesian inference. Some background stuff didn’t fit in the (introductory) talk itself, so this is more of a space for me to put the rest. Below, I will discuss a few things: first, how Bayesian inference leads to the need to sample from high-dimensional functions. I’ll then discuss some peculiarities of high-dimensional spaces. These will be used when I discuss a few different solution methods, to explain why and when they work or fail....
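
To give a rough idea of what “sampling from a high-dimensional function” can look like in code, here is a minimal random-walk Metropolis sketch in Python. The Gaussian target, step size, and dimension are placeholder choices for illustration, not anything specific to the post itself.

```python
import numpy as np

def log_density(x):
    # Placeholder target: a standard Gaussian in D dimensions,
    # standing in for whatever posterior you actually care about.
    return -0.5 * np.dot(x, x)

def random_walk_metropolis(n_samples, dim, step=0.3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    samples = np.empty((n_samples, dim))
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(dim)
        # Accept with probability min(1, p(proposal) / p(x)), done in log space.
        if np.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples[i] = x
    return samples

samples = random_walk_metropolis(5_000, dim=50)
```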

23 July, 2024

Fourier Analysis for DS

People going into Data Science as a profession tend to come from a diverse set of technical backgrounds. However, in the last few years more and more have come specifically from Data Science master’s programs. Fourier Analysis is a topic that tends not to be discussed in these settings. I think it’s still interesting enough to dive into for a bit, both because of its interesting mathematics and because it can give a lot of insight when working with time-series data....
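
As a small illustration of the kind of insight meant here, a sketch of picking hidden frequencies out of a noisy time series with NumPy’s FFT; the signal and sampling rate are made up for this example.

```python
import numpy as np

# Made-up signal: 5 Hz and 12 Hz components plus noise, sampled at 100 Hz for 10 s.
fs = 100
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
signal += 0.3 * rng.standard_normal(t.size)

# The magnitude spectrum shows the two hidden frequencies as clear peaks.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
print(sorted(freqs[np.argsort(spectrum)[-2:]]))  # roughly [5.0, 12.0]
```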

16 March, 2024

Impractical Time Telling II

This is a follow-up to the Impractical Time Telling post. Impractical Time Telling II: The Practicalities. The plan was to do this in some downtime over the Christmas holidays, but the stars already aligned a few weeks ago. I found a Garmin Instinct 2S smartwatch in the house, and Garmin actually has a good setup (called Connect IQ) for developers to create their own apps, watchfaces, et cetera. Garmin development. To develop a watchface, you should install the SDK on your machine, along with packages for the specific devices that you want to emulate....

14 December, 2023

Knot Theory

Playing with strings. Knot theory is one of those topics where you start out by asking a very simple and natural question, follow a thread (hehe), then look around you and realize you’re knee-deep in at least 5 fields of math. The central topic of interest within knot theory is - you guessed it - knots. A knot in this context can be thought of as just a piece of string that is attached together at the ends....

2 December, 2023

Geometric Algebra

Vector Product Aesthetics. The Beauty: Inner Product. Think back for a minute to your first Linear Algebra course: Remember how nice inner products were to compute? Try to think of how to do it off the top of your head. If it’s a bit blurry: it’s just taking each component of the vectors, multiplying them and adding all the results: $$\mathbf{a \cdot b} = \sum_{i=0}^{N} a_i b_i$$ Calculating it gives you a scalar that says something about the angle between the two....
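
To make the formula above concrete, here is the same computation in Python with a couple of made-up example vectors (just an illustration, not taken from the post).

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

# Multiply the vectors component by component and add up the results,
# exactly as in the formula above.
manual = sum(a_i * b_i for a_i, b_i in zip(a, b))
assert np.isclose(manual, np.dot(a, b))

# The resulting scalar relates to the angle between the two vectors:
# a . b = |a| |b| cos(theta).
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(manual, cos_theta)
```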

9 November, 2023

Impractical Time Telling

You know how sometimes problems are just completely solved and thus boring? Like telling the time: We used to have sundials, now we have quartz watches, digital clocks… Yawn. A while ago I got a watch that was quite funky, although a bit of a challenge to read. It got me thinking: why not create a nice problem to solve by coming up with an impractical representation of the current time, and then making a watch face that can display it as reasonably as possible?...

3 November, 2023

Understanding Neural Networks

Understanding Neural nets from first principles: a brain dump. So I was reading my company’s IT newsletter the other day, where one of the topics was sparse modeling (discussing this Forbes article), and it got me thinking again about some things I had been reading over the past months about trying to understand how and why (mainly) Deep Learning works. First principles. In a way, sure, we know how it works at the microscopic level of each individual neuron (activation functions, matrix multiplications, gradient descent and all that), and we also often describe it at a high level (where we tend to greatly anthropomorphize it: “the model learned to do X because in all its examples it saw this object from the same angle, ....

1 November, 2023