In machine learning, classification is one of the most widely deployed task types, with a broad range of possible applications. A well-known problem in classification, however, is that of imbalanced datasets: many algorithms tend to favor the majority class and, in some cases, ignore the minority class entirely. Since the minority class is often the most valuable one, this leads to underperforming and undeployable implementations.

Many solutions have been proposed for this problem, ranging from alternative algorithms and modifications of existing algorithms to data-manipulation methods. This study aims to contribute to the field by benchmarking three commonly applied algorithms (random forest, gradient-boosted decision trees, and multi-layer perceptron) in combination with three data-manipulation strategies (oversampling, undersampling, and no data manipulation). This was done through experiments on three differently shaped datasets.

The results point towards random forest being the best overall performing algorithm, although on data with many categorical dimensions the multi-layer perceptron was the top performer. Regarding data manipulation, undersampling was the best approach across all datasets and algorithms.
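To make the two data-manipulation strategies concrete, the following is a minimal sketch of random oversampling (duplicating minority-class samples) and random undersampling (discarding majority-class samples), written with hypothetical NumPy helper functions; the study itself does not specify an implementation, so this illustrates only the general idea.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    # Duplicate samples of the smaller classes until all classes
    # match the size of the largest class.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        c_idx = np.flatnonzero(y == c)
        extra = rng.choice(c_idx, size=n_max - n, replace=True)
        idx.append(np.concatenate([c_idx, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

def random_undersample(X, y, seed=0):
    # Keep only as many samples per class as the smallest class has.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[idx], y[idx]

# Tiny imbalanced toy set: 8 majority samples, 2 minority samples.
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)

Xo, yo = random_oversample(X, y)   # both classes grow to 8 samples
Xu, yu = random_undersample(X, y)  # both classes shrink to 2 samples
```

In a benchmark like the one described, each of the three resampled variants of the training data (oversampled, undersampled, untouched) would be fed to each of the three algorithms, and the resulting nine models compared on a held-out test set.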