With the advent of deep learning, the list of applications to which Deep Convolutional Neural Networks (DCNNs) can be applied has grown continuously. The field of Multi-Task Learning (MTL) seeks to improve the performance of many-task systems through optimization algorithms and structural changes to these networks. However, we have found that current MTL optimization algorithms often impose burdensome computational overhead, require meticulously labeled datasets, and fail to adapt to tasks with significantly different loss distributions. We propose a new MTL optimization algorithm: Batch Swapping with Multiple Optimizers (BSMO). We utilize single-task labeled data to train a multi-task hard parameter sharing (HPS) network by swapping tasks at the batch level. This dramatically increases the flexibility and scalability of training an HPS network by allowing per-task datasets and augmentation pipelines. We demonstrate the efficacy of BSMO against current SOTA algorithms on contemporary benchmarks and networks.
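The batch-level task swapping described above can be sketched as a training loop over a shared backbone with per-task heads, where each incoming batch carries only a single task's labels and each task keeps its own optimizer state. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the linear backbone/heads, the round-robin schedule, plain per-task SGD, and all names (`HPSModel`, `train_bsmo`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_grad(pred, target):
    # Gradient of 0.5 * ||pred - target||^2 with respect to pred.
    return pred - target

class HPSModel:
    """Hard-parameter-sharing toy model: one shared linear backbone,
    one linear head per task (illustrative assumption)."""
    def __init__(self, d_in, d_hidden, task_dims):
        self.backbone = rng.normal(0, 0.1, (d_in, d_hidden))
        self.heads = {t: rng.normal(0, 0.1, (d_hidden, d_out))
                      for t, d_out in task_dims.items()}

    def forward(self, x, task):
        h = x @ self.backbone           # shared representation
        return h, h @ self.heads[task]  # task-specific output

def train_bsmo(model, task_batches, lrs, steps):
    """Swap the active task every batch (round-robin here for simplicity);
    each task uses its own optimizer -- modeled as a per-task SGD
    learning rate in `lrs`."""
    tasks = list(task_batches)
    for step in range(steps):
        task = tasks[step % len(tasks)]  # batch-level task swap
        x, y = task_batches[task][step % len(task_batches[task])]
        h, pred = model.forward(x, task)
        g_out = mse_grad(pred, y) / len(x)
        # Backprop through the task head and the shared backbone.
        g_head = h.T @ g_out
        g_backbone = x.T @ (g_out @ model.heads[task].T)
        lr = lrs[task]                   # per-task optimizer state
        model.heads[task] -= lr * g_head
        model.backbone -= lr * g_backbone
    return model
```

Because each batch is labeled for one task only, every task can draw from its own dataset and augmentation pipeline, which is the flexibility the abstract refers to.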