Back to Basics: code & writeup for a practical hyperparameter tuning exercise
Searching a broad hyperparameter space for the best configuration is a common task in ML projects. Here is a code framework, with writeup, covering the different options I bounce between. Give it a go!
This is a very common situation - you pick up a new project and identify the architecture you think will work best (e.g. vanilla neural network, XGBoost, 3D VAE, PCA+NN, whatever). But then you face a huge number of possible hyperparameter combinations governing the model setup. How can we navigate this process efficiently? As I write this, I sense it may become the first part in a series of posts on the topic. But let's start with something practical.
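To make the scale of the problem concrete, here is a minimal sketch. The parameter names mirror XGBoost's common knobs, but the ranges are illustrative, not a recommendation; the point is just how quickly a modest grid explodes:

```python
import math

# Illustrative search space for a gradient-boosted tree model
# (parameter names follow XGBoost's; the candidate values are made up).
search_space = {
    "n_estimators": [100, 300, 500, 1000],
    "max_depth": [3, 5, 7, 9],
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.6, 0.8, 1.0],
    "min_child_weight": [1, 5, 10],
}

# Number of distinct configurations an exhaustive grid search would visit.
n_configs = math.prod(len(values) for values in search_space.values())
print(f"{n_configs} configurations")  # 4 * 4 * 4 * 3 * 3 * 3 = 1728
```

Six knobs with three or four candidate values each already yield 1,728 full training runs - and that is before cross-validation folds or repeated seeds multiply the bill further.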