This framework can be used to reproduce all experiments performed in “Feature
Partitioning for Efficient Multi-Task Architectures”. It provides a variety of
functionality for performing and managing deep learning experiments. In
particular, it helps manage meta-optimization, which is useful for
hyper-parameter tuning and architecture search.
Please note, this is not an official Google product.
Cloud Instance Setup
Start a new Cloud Instance from “Deep Learning Image: PyTorch
1.0.0”. Almost everything needed to get the code up and running is
included automatically with the Deep Learning Image.
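One possible way to create such an instance is from the command line with gcloud. The image family, project, machine type, and accelerator below are assumptions, not values given in this README; adjust them to match the "Deep Learning Image: PyTorch 1.0.0" release you actually use.

```bash
# Sketch: create a GPU instance from a Deep Learning VM image via gcloud.
# Image family/project, zone, machine type, and GPU type are assumptions.
gcloud compute instances create mtl-experiments \
  --zone=us-west1-b \
  --image-family=pytorch-latest-gpu \
  --image-project=deeplearning-platform-release \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-p100,count=1 \
  --maintenance-policy=TERMINATE \
  --metadata=install-nvidia-driver=True
```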
The following instructions assume the git repository has been cloned and placed in the home directory, for example as shown below.
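A minimal sketch of that step; the repository URL is a placeholder, but the `~/mtl` destination matches the paths used in the data setup commands:

```bash
# Clone the repository into the home directory so paths such as
# ~/mtl/data/decathlon resolve as expected. <REPO_URL> is a placeholder.
cd ~
git clone <REPO_URL> mtl
```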
Data setup
Download and set up Visual Decathlon data and annotations:
```bash
wget https://www.robots.ox.ac.uk/~vgg/share/decathlon-1.0-devkit.tar.gz
wget https://www.robots.ox.ac.uk/~vgg/share/decathlon-1.0-data.tar.gz
tar zxf decathlon-1.0-devkit.tar.gz
mv decathlon-1.0 ~/mtl/data/decathlon
tar zxf decathlon-1.0-data.tar.gz -C ~/mtl/data/decathlon/data
cd ~/mtl/data/decathlon/data
for f in *.tar; do tar xf "$f"; done
```
Network training
When launching network training, the argument -e specifies the experiment name, and --config specifies the appropriate configuration file. Further details about network training can be found here.
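As a minimal sketch of how those arguments fit together (the entry-point script name `main.py` and the config path are assumptions, not confirmed by this README):

```bash
# Hypothetical invocation: the script name and config path are placeholders.
# -e names the experiment; --config points at the configuration file to use.
python main.py -e my_experiment --config config/example.yaml
```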