High-Dimensional Sampling

Most of the content here I prepared for a talk on Bayesian inference. Some background material didn’t fit in the (introductory) talk itself, so this is more of a space for me to put the rest. Below, I will discuss a few things: first, how Bayesian inference leads to the need to sample from high-dimensional functions. I’ll then discuss some peculiarities of high-dimensional spaces, which I will use when covering a few different solution methods, to explain why and when they work or fail....

23 July, 2024

Fourier Analysis for DS

People going into Data Science as a profession tend to come from a diverse set of technical backgrounds. In the last few years, however, more and more come specifically from Data Science master’s programs. Fourier Analysis is a topic that tends not to be discussed in these settings. I think it’s still worth diving into for a bit, both because of its interesting mathematics and because it can give a lot of insight when working with time-series data....

16 March, 2024

Understanding Neural Networks

Understanding neural nets from first principles: a brain dump. So I was reading my company’s IT newsletter the other day, where one of the topics was sparse modeling (discussing this Forbes article), and it got me thinking again about some things I had been reading over the past months about trying to understand how and why (mainly) Deep Learning works. In a way, sure, we know how it works at the microscopic level of each individual neuron (activation functions, matrix multiplications, gradient descent and all that), and we also often describe it at a high level (where we tend to greatly anthropomorphize it: “the model learned to do X because in all its examples it saw this object from the same angle, ....

1 November, 2023