Functional Programming Chronicles

The past year I’ve been exploring some other programming languages and paradigms. This post collects some thoughts and learnings along the way. Note that this will not be a tutorial, although I will sprinkle in links to tutorials if you are interested.

Why

As someone who never had a formal education in computer science (apart from the odd electives given in C and Lisp in university), a programming language has mostly been just a tool to implement my thoughts and algorithms and make them compile/run. After starting work as a data scientist, I quickly settled on Python to get the job done™ for most of my work. Compared to a lot of languages I knew up until then, it was simple and quick to get something going. It didn’t get in my way and allowed me to basically write code in English. It had an abundance of ML-related libraries, which meant that for my work it was almost impossible not to use it. ...

27 June, 2025

High-Dimensional Sampling

Most of the content here I prepared for a talk on Bayesian inference. Some background material didn’t fit in the (introductory) talk itself, so this is more of a space for me to put the rest. Below, I will discuss a few things: firstly, how Bayesian inference leads to the need to sample from high-dimensional functions. I’ll then discuss some peculiarities of high-dimensional spaces. These will be used when I discuss a few different solution methods, to explain why and when they work or fail. ...
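One classic peculiarity of high-dimensional spaces hinted at above: almost all of a hypercube's volume lies outside the inscribed ball as the dimension grows. A minimal Monte Carlo sketch in Python (my own illustration, not taken from the talk):

```python
import random

def inside_unit_ball_fraction(dim, n_samples=20000, seed=0):
    """Estimate the fraction of the [-1, 1]^dim hypercube that lies
    inside the unit ball, by uniform random sampling."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        if sum(rng.uniform(-1, 1) ** 2 for _ in range(dim)) <= 1:
            hits += 1
    return hits / n_samples

# In 2D the inscribed disc covers most of the square (pi/4, about 0.785),
# but in 10D the inscribed ball is a vanishing sliver of the cube (~0.0025).
```

This volume concentration is part of why naive sampling schemes break down in high dimensions: a sampler that draws points "near" a mode by low-dimensional intuition almost never lands where the probability mass actually is.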

23 July, 2024

Fourier Analysis for DS

People going into Data Science as a profession tend to come from a diverse set of technical backgrounds. However, in the last few years more and more come specifically from Data Science master’s programs. Fourier Analysis is a topic that tends not to be discussed in these settings. I think it’s still interesting enough to dive into for a bit, both because of its interesting mathematics and because it can give a lot of insight when working with time-series data. ...
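As a taste of the kind of insight Fourier analysis gives on time series, here is a minimal sketch (my own illustration, not from the post) of a naive discrete Fourier transform picking a single frequency out of a signal:

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A sine completing 3 cycles over the window shows up as a spike at bin 3
# (and its conjugate mirror at bin N - 3):
x = [math.sin(2 * math.pi * 3 * n / 32) for n in range(32)]
spectrum = [abs(c) for c in dft(x)]
```

A real FFT library (e.g. `numpy.fft`) does the same thing in O(N log N), but the O(N²) version above makes the definition explicit.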

16 March, 2024

Impractical Time Telling II

This is a follow-up to the Impractical Time Telling post.

Impractical Time Telling II: The Practicalities

The plan was to do this in some downtime over the Christmas holidays, but the stars already aligned a few weeks ago. I found a Garmin Instinct 2S smartwatch in the house, and Garmin actually has a good setup (called Connect IQ) for developers to create their own apps, watchfaces, et cetera.

Garmin development

To develop a watchface, you install the SDK on your machine, along with packages for the specific devices that you want to emulate. It’s highly recommended to then also install the PyCharm or VSCode extension for ‘Monkey C’. From your IDE you can then easily start an emulator for a device to test your software during development. Monkey C is Garmin’s own object-oriented language, intended to make app development easy. ...

14 December, 2023

Knot Theory

Playing with strings

Knot theory is one of those topics where you start out by asking a very simple and natural question, follow a thread (hehe), then look around you and realize you’re knee-deep in at least 5 fields of math. The central topic of interest within knot theory is - you guessed it - knots. A knot in this context can be thought of as just a piece of string that is attached together at the ends. So if you feel like it, go and grab a piece of string lying around your house, or cut open a rubber band or whatever. It’s literally all you need. ...

2 December, 2023

Geometric Algebra

Vector Product Aesthetics

The Beauty: Inner Product

Think back for a minute to your first Linear Algebra course: remember how nice inner products were to compute? Try to think of how to do it off the top of your head. If it’s a bit blurry: it’s just taking each component of the two vectors, multiplying them pairwise and adding up the results: $$\mathbf{a \cdot b} = \sum_{i=1}^{N} a_i b_i$$ Calculating it gives you a scalar that says something about the angle between the two vectors. It is as simple to do in 687D as it is in 2D. ...
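The component-wise recipe above fits in a few lines of Python (a plain-Python sketch of the standard formula, not code from the post):

```python
import math

def dot(a, b):
    """Inner product: multiply components pairwise and sum the results."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same dimension")
    return sum(x * y for x, y in zip(a, b))

def angle(a, b):
    """Angle between two vectors, via a.b = |a| |b| cos(theta)."""
    return math.acos(dot(a, b) / math.sqrt(dot(a, a) * dot(b, b)))

# The same code works in 2D or 687D alike:
dot([1, 2], [3, 4])  # 1*3 + 2*4 = 11
```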

9 November, 2023

Impractical Time Telling

You know how sometimes problems are just completely solved and thus boring? Like telling the time: we used to have sundials, now we have quartz watches, digital clocks… Yawn. A while ago I got a watch that was quite funky, although a bit of a challenge to read. It got me thinking: why not invent a nice problem by coming up with an impractical representation of the current time, and then make a watch face that can display it as reasonably as possible? ...

3 November, 2023

Understanding Neural Networks

Understanding Neural nets from first principles: Brain dump

So I was reading my company’s IT newsletter the other day, where one of the topics was sparse modeling (discussing this Forbes article), and it got me thinking again about some things I have been reading over the past months about trying to understand how and why (mainly) Deep Learning works.

First-principles

In a way, sure, we know how it works at the microscopic level of each individual neuron (activation functions, matrix multiplications, gradient descent and all that), and we also often describe it at a high level (where we tend to greatly anthropomorphize it: “the model learned to do X because in all its examples it saw this object from the same angle, ..”). But there are many questions in between where it seems we never connected the dots, instead relying on empiricism, often crude observations and post-hoc justifications for the choices made: ...

1 November, 2023