Paper Notes: Deep Domain Confusion: Maximizing for Domain Invariance


Overview

The paper proposes using MMD (Maximum Mean Discrepancy) to measure the distance between the hidden representations learned by a CNN. With this criterion, the network automatically learns a cross-domain representation by maximizing label dependence while minimizing the discrepancy between domains (i.e., maximizing domain invariance).
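As a concrete illustration, here is a minimal PyTorch sketch of the squared MMD with a linear (identity) kernel, which reduces to the squared distance between the mean source and mean target activations. The function name `linear_mmd2` and the toy feature tensors are illustrative assumptions, not code from the paper or the repository.

```python
import torch

def linear_mmd2(feat_src: torch.Tensor, feat_tgt: torch.Tensor) -> torch.Tensor:
    """Squared MMD with a linear (identity) kernel: the squared Euclidean
    distance between the mean source activation and the mean target activation."""
    delta = feat_src.mean(dim=0) - feat_tgt.mean(dim=0)
    return torch.dot(delta, delta)

# toy usage: two batches of 4096-d fc7-style features
feat_src = torch.randn(32, 4096)
feat_tgt = torch.randn(48, 4096)
print(linear_mmd2(feat_src, feat_tgt))
```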

Method

[Figure: model architecture from the paper]
The following loss function is proposed to formalize this objective:

$$\mathcal{L} = \mathcal{L}_{C}\left(X_{L}, y\right) + \lambda \operatorname{MMD}^{2}\left(X_{S}, X_{T}\right)$$

where the first term is the classification loss on the labeled data $(X_L, y)$ and the second term is the distance between the source domain $X_S$ and the target domain $X_T$.
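A minimal sketch of how this loss could be assembled in PyTorch, assuming the source logits/labels and the source/target adaptation-layer features are already available. The function name `ddc_loss` and the default value of `lam` are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ddc_loss(logits_src: torch.Tensor, labels_src: torch.Tensor,
             feat_src: torch.Tensor, feat_tgt: torch.Tensor,
             lam: float = 0.25) -> torch.Tensor:
    # L_C(X_L, y): classification loss on the labeled (source) data
    cls_loss = F.cross_entropy(logits_src, labels_src)
    # MMD^2(X_S, X_T) with a linear kernel: squared distance between feature means
    delta = feat_src.mean(dim=0) - feat_tgt.mean(dim=0)
    mmd2 = torch.dot(delta, delta)
    # total loss L = L_C + lambda * MMD^2
    return cls_loss + lam * mmd2
```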

The paper adds a lower-dimensional "bottleneck" adaptation layer. The authors' intuition is that a low-dimensional layer can regularize the training of the source-domain classifier and prevent overfitting to the particular nuances of the source distribution.
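A sketch of what inserting such a bottleneck adaptation layer into AlexNet might look like in PyTorch (torchvision >= 0.13 for the `weights=` argument). The class name `DDCNet`, the 256-unit bottleneck width, and the 31 Office-31 classes are assumptions for illustration, not necessarily the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class DDCNet(nn.Module):
    """AlexNet with a low-dimensional 'bottleneck' adaptation layer inserted
    between fc7 and the label classifier (256 units is an assumed width)."""
    def __init__(self, num_classes: int = 31, bottleneck_dim: int = 256):
        super().__init__()
        alexnet = models.alexnet(weights="IMAGENET1K_V1")  # ImageNet-pretrained
        self.features = alexnet.features
        self.avgpool = alexnet.avgpool
        # keep fc6/fc7 (with their dropout/ReLU), drop the original 1000-way fc8
        self.fc = nn.Sequential(*list(alexnet.classifier.children())[:-1])
        self.bottleneck = nn.Linear(4096, bottleneck_dim)   # adaptation layer
        self.classifier = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, x: torch.Tensor):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        feat = self.bottleneck(x)        # features used for the MMD term
        return self.classifier(feat), feat
```

The forward pass returns both the class logits and the bottleneck features, so the MMD term can be computed on the latter for source and target batches.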

# DDC-transfer-learning

A simple implementation of Deep Domain Confusion: Maximizing for Domain Invariance, inspired by [transferlearning](https://github.com/jindongwang/transferlearning). The project contains *Pytorch* code for fine-tuning *Alexnet* as well as *DDCnet*, implemented according to the original paper, which adds an adaptation layer into the Alexnet. The *office31* dataset used in the paper is also used in this implementation to test the performance of fine-tuning *Alexnet* and *DDCnet* with an additional linear *MMD* loss.

# Run the work

* Run `python alextnet_finetune.py` to fine-tune a pretrained *Alexnet* on the *office31* dataset with *full-training*.
* Run `python DDC.py` to train *DDCnet* (a pretrained *Alexnet* with an adaptation layer and MMD loss) on the *office31* dataset with *full-training*.

# Experiment Results

Note that the *full-training* protocol (taking all the samples from one domain as the source or target domain) and the *down-sample* protocol (choosing 20 or 8 samples per category as the domain data) are quite different data-preparation methods and yield different experiment results.

| Methods | Results (amazon to webcam) |
| :------: | :------: |
| fine-tuning Alexnet (full-training) in *Pytorch* | Around 51% |
| DDC (pretrained Alexnet with adaptation layer and MMD loss) in *Pytorch* | Around 56% |

# Future work

- [ ] Write a data loader using the *down-sample* protocol mentioned in the paper instead of the *full-training* protocol.
- [ ] Try a TensorFlow version to see whether the framework makes a difference in the final experiment results.

# Reference

Tzeng E, Hoffman J, Zhang N, et al. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.