To improve accuracy in transfer learning, where a network is pre-trained on a dataset A such as ImageNet and then transferred to a new dataset (picture left), we used PyTorch to add a new data-driven loss layer to DenseNet to extract more features from dataset A (picture right). We observed no improvement. We also replicated previous transfer learning results relating freezing-layer depth to performance.
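For context, here is a minimal PyTorch sketch of this kind of setup, not the project's exact code. It assumes torchvision's densenet121 pre-trained on ImageNet (playing the role of dataset A) and a hypothetical 10-class target dataset; the build_transfer_model helper, the freeze_depth parameter, and the dummy batch are illustrative, and the project's custom data-driven loss layer is not shown.

```python
# Minimal sketch: DenseNet transfer learning with a configurable freezing depth.
# Assumes torchvision's densenet121 pre-trained on ImageNet and a hypothetical
# 10-class target dataset; names and values here are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes=10, freeze_depth=2):
    """Load an ImageNet-pre-trained DenseNet and freeze its earliest layers."""
    model = models.densenet121(pretrained=True)

    # DenseNet's feature extractor is a sequential stack:
    # conv0, norm0, relu0, pool0, denseblock1, transition1, ..., denseblock4, norm5.
    # Freeze the stem plus the first `freeze_depth` dense blocks (0 <= freeze_depth <= 4).
    blocks = list(model.features.children())
    for module in blocks[: 4 + 2 * freeze_depth]:
        for param in module.parameters():
            param.requires_grad = False

    # Replace the ImageNet classifier head with one sized for the new dataset.
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = build_transfer_model(num_classes=10, freeze_depth=2)
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy batch from the target dataset.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Varying freeze_depth while fine-tuning on the new dataset is the knob behind the freezing-layer-depth replication: larger values keep more of the features learned on dataset A fixed.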

This was a class project for CS6784: Advanced Topics in Machine Learning, taught by Kilian Weinberger at Cornell University in the fall of 2017.

Abstract

Transfer learning applies what is learned on one dataset to another to improve performance. We replicate, on DenseNet, previous studies of the relationship between layer depth and feature dataset-specificity that were originally conducted on AlexNet. We also explored a novel data-driven layer intended to help extract more features from the first dataset, but did not find a significant improvement.

Methods