
Abstract
This paper proposes a first step towards compatible and hence reusable network components. The idea is to split a network into two components: a feature extractor and a target task head. The approach is validated on three applications: unsupervised domain adaptation, transferring classifiers across feature extractors with different architectures, and increasing the computational efficiency of transfer learning (i.e., the three tasks of domain adaptation, classifier transferability, and efficient transfer learning).
Introduction
We believe that a general way to achieve network reusability is to build a large library of compatible components, each specialized for a different task.
We make a first step in this direction by devising a training procedure that makes the feature representations learned on different tasks compatible, without any post-hoc fine-tuning.
The compatibility of components saves the designer the effort of making them work together in a new combination, leaving them free to focus on designing ever more complex models.
We say two networks are compatible if we can recombine the feature extractor of one network with the task head of the other and still produce good predictions, directly and without any fine-tuning after recombination.
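The recombination in this definition can be sketched with two toy "networks", each split into a feature extractor and a task head. This is an illustrative sketch only (the network shapes, names, and random weights are hypothetical, not from the paper); it shows the mechanics of swapping components without fine-tuning, not the compatibility training itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_network(in_dim, feat_dim, out_dim):
    """Build a toy network as a (feature extractor, task head) pair."""
    W1 = rng.normal(size=(in_dim, feat_dim))  # extractor weights (hypothetical)
    W2 = rng.normal(size=(feat_dim, out_dim))  # head weights (hypothetical)
    extractor = lambda x: np.maximum(x @ W1, 0.0)  # x -> feature representation
    head = lambda f: f @ W2                        # features -> task predictions
    return extractor, head

# Two independently built networks with the same feature dimensionality.
ext_a, head_a = make_network(8, 16, 3)
ext_b, head_b = make_network(8, 16, 3)

x = rng.normal(size=(4, 8))
# Recombination: features from network A are fed straight into the head of
# network B, with no fine-tuning after recombination.
preds = head_b(ext_a(x))
print(preds.shape)  # (4, 3)
```

The networks here are compatible in the trivial sense that their shapes match; the paper's contribution is a training procedure under which such recombined predictions remain accurate.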
Method
Sec. 3 introduces three ways to alter the training procedure of neural networks to encourage compatibility, along with a definition and discussion of compatibility.

Conclusion
We have demonstrated that we can train networks to produce compatible features, without compromising accuracy on the original tasks.
Key points: the paper proposes a new procedure that learns a set of highly compatible network components; for each new task, one only needs to recombine these components. The central question is how to define "compatible", and the whole paper (the layout of both the method and the experiment sections) is organized around compatibility.
Training method for compatible network components

This paper proposes a method for training compatible network components: by decomposing a network into a feature extractor and a target task head, it enables component reuse across different tasks. The method recombines components without any additional fine-tuning, and is validated on three tasks: domain adaptation, classifier transferability, and efficient transfer learning.




