[Paper Notes] Decoupling Representation and Classifier for Long-Tailed Recognition

This paper does not really propose anything new; it is closer to a survey-style empirical study of existing techniques. TL;DR: τ-normalization works best.

Abstract

Status quo: Existing solutions usually involve class-balancing strategies, e.g. loss re-weighting, data re-sampling, or transfer learning from head to tail classes, but most of them adhere to the scheme of jointly learning representations and classifiers.
Approach: decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect each of them for long-tailed recognition.
Findings: The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier.

Classification for Long-Tailed Recognition

cRT (Classifier Re-training)
Re-train the classifier with class-balanced sampling. That is, keeping the representations fixed, randomly re-initialize and optimize the classifier weights W and b for a small number of epochs using class-balanced sampling.
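A minimal sketch of the cRT idea on toy data (the toy setup, feature dimensions, and hyperparameters are all illustrative, not from the paper): the features are treated as frozen, and only a re-initialized linear classifier (W, b) is trained with class-balanced batches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy long-tailed setup: frozen per-class features with imbalanced counts.
n_per_class = [100, 10, 2]                      # head -> tail
feats, labels = [], []
for c, n in enumerate(n_per_class):
    center = np.eye(3)[c] * 3.0                 # well-separated class centers
    feats.append(center + rng.normal(size=(n, 3)))
    labels.append(np.full(n, c))
X, y = np.vstack(feats), np.concatenate(labels)

# cRT: re-initialize W, b and optimize them with class-balanced sampling,
# keeping the representation (here, X itself) fixed.
W, b = rng.normal(scale=0.01, size=(3, 3)), np.zeros(3)
per_class_idx = [np.flatnonzero(y == c) for c in range(3)]
for step in range(500):
    # class-balanced batch: pick a class uniformly, then an instance from it
    cs = rng.integers(0, 3, size=32)
    idx = np.array([rng.choice(per_class_idx[c]) for c in cs])
    xb, yb = X[idx], y[idx]
    logits = xb @ W + b
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(len(yb)), yb] -= 1.0            # softmax cross-entropy gradient
    W -= 0.1 * xb.T @ p / len(yb)
    b -= 0.1 * p.mean(0)

acc = (np.argmax(X @ W + b, 1) == y).mean()
```

In the real method the frozen features come from a backbone trained with instance-balanced sampling; only the classifier-retraining step is sketched here.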

NCM (Nearest Class Mean classifier)

  1. compute the mean feature representation for each class on the training set;
  2. perform nearest neighbor search, using either cosine similarity or the Euclidean distance computed on L2-normalized mean features.

Note that the cosine similarity alleviates the weight imbalance problem via its inherent normalization.
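The steps above fit in a few lines of NumPy (the function name and toy shapes are my own):

```python
import numpy as np

def ncm_predict(train_X, train_y, test_X, num_classes):
    """Nearest Class Mean classifier with cosine similarity.

    1) mean feature per class, 2) L2-normalize means and queries,
    3) assign each query to the class with the highest cosine similarity.
    """
    means = np.stack([train_X[train_y == c].mean(axis=0)
                      for c in range(num_classes)])
    means /= np.linalg.norm(means, axis=1, keepdims=True)
    queries = test_X / np.linalg.norm(test_X, axis=1, keepdims=True)
    return np.argmax(queries @ means.T, axis=1)
```

Because both means and queries are L2-normalized, ranking by cosine similarity is equivalent to ranking by Euclidean distance on the normalized vectors.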

τ-norm (τ-normalized classifier)
Motivation: after joint training with instance-balanced sampling, the norms of the classifier weights ||w_j|| are correlated with the cardinality n_j of the classes, while after fine-tuning the classifiers using class-balanced sampling, the norms of the classifier weights tend to be more similar.
Approach: adjust the classifier weight norms directly through the τ-normalization procedure.
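Concretely, τ-normalization rescales each classifier weight as w̃_i = w_i / ||w_i||^τ, where τ ∈ [0, 1]: τ = 0 leaves the weights unchanged and τ = 1 fully L2-normalizes them. A minimal NumPy sketch (the function name is mine):

```python
import numpy as np

def tau_normalize(W, tau):
    """Scale each classifier weight w_i by 1 / ||w_i||^tau.

    tau = 0 leaves W unchanged; tau = 1 fully L2-normalizes every row,
    removing the norm imbalance induced by instance-balanced training.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / norms ** tau

# Head classes end up with larger weight norms after joint training;
# tau-normalization shrinks that gap.
W = np.array([[3.0, 4.0],    # head class, ||w|| = 5
              [0.3, 0.4]])   # tail class, ||w|| = 0.5
W_tau = tau_normalize(W, tau=1.0)
```

In the paper, τ is a single scalar chosen on a validation set, applied after normal joint training.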

LWS (Learnable Weight Scaling)
Another way of interpreting τ-normalization is as a re-scaling of the magnitude of each classifier weight w_i while keeping its direction unchanged; LWS makes these per-class scaling factors learnable instead of deriving them from τ.

Sampling Strategies

Instance-balanced sampling: the most common way of sampling data, where each training example has equal probability of being selected.

Class-balanced sampling: each class has an equal probability of being selected. One can see this as a two-stage sampling strategy, where first a class is selected uniformly from the set of classes, and then an instance from that class is subsequently uniformly sampled.
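A sketch of the two-stage sampler in NumPy (the function name and the toy imbalance are illustrative): even with a 99:1 head/tail imbalance, roughly half of the sampled indices come from the tail class.

```python
import numpy as np

def class_balanced_indices(labels, num_samples, rng):
    """Two-stage sampling: uniform over classes, then uniform within class."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    per_class = {c: np.flatnonzero(labels == c) for c in classes}
    chosen_classes = rng.choice(classes, size=num_samples)
    return np.array([rng.choice(per_class[c]) for c in chosen_classes])

rng = np.random.default_rng(0)
labels = [0] * 990 + [1] * 10          # 99:1 head/tail imbalance
idx = class_balanced_indices(labels, 10_000, rng)
tail_frac = np.mean(np.asarray(labels)[idx] == 1)   # close to 0.5
```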

Square-root sampling
A variant of the strategies above in which the sampling probability of class j is proportional to the square root of its size, i.e. p_j ∝ n_j^{1/2}, softening the head/tail gap without fully equalizing it.

Progressive-balanced sampling
This progressively interpolates between the two regimes: early epochs sample (mostly) instance-balanced, and the sampling shifts toward class-balanced as training progresses.
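The paper unifies these strategies as p_j ∝ n_j^q (q = 1: instance-balanced, q = 1/2: square-root, q = 0: class-balanced), with progressive-balanced sampling linearly interpolating from the instance-balanced to the class-balanced distribution over training. A sketch (function names are mine):

```python
import numpy as np

def sampling_probs(counts, q):
    """p_j proportional to n_j ** q.

    q=1: instance-balanced, q=0.5: square-root, q=0: class-balanced.
    """
    counts = np.asarray(counts, dtype=float)
    w = counts ** q
    return w / w.sum()

def progressive_probs(counts, epoch, total_epochs):
    """Linear interpolation from instance-balanced toward class-balanced."""
    t = epoch / total_epochs
    return (1 - t) * sampling_probs(counts, 1.0) + t * sampling_probs(counts, 0.0)

counts = [900, 90, 10]
p_ib = sampling_probs(counts, 1.0)    # dominated by the head class
p_cb = sampling_probs(counts, 0.0)    # uniform over classes
p_mid = progressive_probs(counts, epoch=45, total_epochs=90)
```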

Experiments

As illustrated in Fig. 4, joint training yields wider classification boundaries in feature space for data-rich classes, allowing the classifier to have much higher accuracy on them but hurting data-scarce classes. τ-normalized classifiers alleviate this issue to some extent by providing more balanced classifier weight magnitudes.
Official code: https://github.com/facebookresearch/classifier-balancing
