Leveraging the Feature Distribution in Transfer-based Few-Shot Learning: Paper Translation & Notes

Abstract

In the past, strong backbones combined with effective post-processing have allowed transfer-based methods to reach the strongest performance. Following this idea, the paper proposes a new transfer-based method that improves on two points:

  1. Preprocess the feature vectors so that their distribution is close to Gaussian
  2. Exploit this distribution with an algorithm inspired by optimal transport
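The Gaussian-alignment step can be illustrated with an element-wise power transform followed by L2 normalization. The sketch below is only a guess at the general shape of such preprocessing; `beta` and `eps` are illustrative defaults, not the paper's exact settings:

```python
import numpy as np

def power_transform(features, beta=0.5, eps=1e-6):
    """Map non-negative feature vectors closer to a Gaussian shape.

    Applies an element-wise power function to reduce the skew of
    ReLU-style features, then L2-normalizes each vector.
    (beta and eps are illustrative defaults.)
    """
    powered = np.power(features + eps, beta)
    norms = np.linalg.norm(powered, axis=1, keepdims=True)
    return powered / norms

# Toy usage: 4 feature vectors of dimension 5, non-negative as after a ReLU.
rng = np.random.default_rng(0)
feats = rng.exponential(scale=1.0, size=(4, 5))
out = power_transform(feats)
print(np.allclose(np.linalg.norm(out, axis=1), 1.0))  # each row is unit length
```

The exponential toy data mimics the heavy right skew typical of post-ReLU activations; the power function compresses large values and spreads small ones.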

1. Intro

Transfer-based architectures (also called backbone architectures): the domain used for training usually differs from the target domain, so the distribution of the feature vectors extracted by the backbone is complex. Methods that make strong assumptions about the data distribution therefore cannot make good use of the extracted features. The paper tackles transfer-based few-shot learning from two directions: (1) preprocess the features so that they fit a Gaussian distribution, and (2) exploit this specific distribution, using an elegant algorithm based on maximum a posteriori (MAP) estimation and optimal transport.
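As background for the optimal-transport side, a minimal entropy-regularized Sinkhorn sketch is shown below. It alternately rescales rows and columns of a cost-derived kernel so the transport plan matches the requested marginals; the cost matrix, marginals, and regularization value are made up for illustration and this is not the paper's exact MAP procedure:

```python
import numpy as np

def sinkhorn(cost, row_marginals, col_marginals, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    Alternately rescales rows and columns of the Gibbs kernel so the
    returned transport plan (approximately) matches both marginals.
    """
    K = np.exp(-cost / reg)                  # Gibbs kernel
    u = np.ones(len(row_marginals))
    for _ in range(n_iters):
        v = col_marginals / (K.T @ u)        # fit column sums
        u = row_marginals / (K @ v)          # fit row sums
    return u[:, None] * K * v[None, :]

# Toy case: 4 query samples, 2 classes, equal class capacities.
rng = np.random.default_rng(1)
cost = rng.random((4, 2))                    # e.g. distances to class centers
plan = sinkhorn(cost, np.full(4, 0.25), np.full(2, 0.5))
print(plan.sum())  # total mass is 1
```

Reading each row of `plan` as a soft assignment of one query sample over the classes is what lets optimal transport enforce class-balance constraints during label assignment.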

2. Method

2.1 Problem Definition

Dbase contains a large number of labeled samples drawn from K base classes.
Dnovel, also called a task, contains a few labeled samples (the support set) and some unlabeled samples (the query set) drawn from w novel classes.

Note that both the support set and the query set belong to Dnovel.
The goal is to predict the labels of the unlabeled samples in the query set.
This is the w-way, s-shot setting, with q unlabeled samples per class.
Dnovel therefore contains w(s+q) samples in total: ws labeled, and wq to be classified.
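Worked out for a common benchmark configuration (5-way 5-shot with 15 queries per class; the specific numbers here are illustrative, not necessarily the paper's evaluation setting):

```python
w, s, q = 5, 5, 15           # ways, shots, queries per class (illustrative)
labeled = w * s              # support samples
unlabeled = w * q            # query samples to classify
total = w * (s + q)
print(labeled, unlabeled, total)  # 25 75 100
```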

2.2 Feature Extraction

First, the neural-network backbone is trained using only Dbase. The paper uses two backbones, WRN (a wide ResNet) and DenseNet. The trained backbones are then used as fixed feature extractors for the novel-class samples.
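Mechanically, extraction means running every Dnovel sample through the frozen backbone and keeping the penultimate activations. A toy numpy sketch with a stand-in "backbone" (a fixed random projection followed by ReLU; the dimensions and the projection itself are made up, whereas the real backbones are the trained WRN/DenseNet):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a trained, frozen backbone: fixed weights, ReLU output.
# (Real backbones also yield non-negative features when the last
# layer is followed by a ReLU.)
W = rng.standard_normal((512, 640))    # hypothetical: 512-dim input -> 640-dim feature

def extract_features(batch):
    """Apply the frozen backbone; no weight update happens here."""
    return np.maximum(batch @ W, 0.0)  # ReLU keeps features non-negative

# Dnovel for a 2-way 1-shot task with 3 queries per class: 2*(1+3) = 8 samples.
novel_inputs = rng.standard_normal((8, 512))
feats = extract_features(novel_inputs)
print(feats.shape)  # (8, 640)
```

The non-negativity of the extracted features is what makes the Gaussian-style power-transform preprocessing applicable afterwards.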

### Few-Shot Learning Introduction

Few-shot learning refers to a class of machine learning problems in which a model must learn from very few examples, typically one or just a handful per category. This setting mimics the human ability to generalize from limited data and has become an important area of deep learning research. Task-level prior knowledge includes all methods that "learn how to learn," such as meta-learning techniques that optimize parameters for unseen tasks and can provide a good initialization for novel tasks[^1].

In this context:

- **Meta-Learning**: aims to design models capable of fast adaptation from minimal training samples by leveraging previously acquired experience.
- **Metric Learning**: focuses on learning distance metrics between instances so that similar instances lie close together while dissimilar ones remain apart in the embedding space.

#### Applications in Machine Learning

One prominent application is fine-grained classification on small datasets such as Mini-ImageNet, where comparisons of different algorithms' embedding-propagation capabilities over time steps show performance improvements (Figure 7)[^2]. Another example comes from multi-label classification, where combining MLP classifiers with KNN-based predictions improves overall accuracy compared to traditional approaches that rely solely on prototypes derived directly from the support set at inference time[^3]. Hybrid embedding strategies have also been explored: they integrate generalizable features learned across diverse domains with adjustments specialized to the characteristics of the given training distribution [Dtrain], improving adaptability without sacrificing much efficiency relative to purely invariant alternatives[^4].

```python
def few_shot_classifier(embedding_model, classifier_type='mlp_knn'):
    """
    Demonstrates a simple implementation outline for integrating
    Multi-layer Perceptron (MLP) and k-nearest neighbors (KNN).

    Args:
        embedding_model: Pre-trained neural network used to generate feature vectors.
        classifier_type: Type of final decision mechanism ('mlp', 'knn', or 'mlp_knn').

    Returns:
        Combined prediction scores based on the selected strategy.
    """
    pass  # Placeholder function body
```

Related questions:

1. What specific challenges do few-shot learning systems face?
2. How does metric learning contribute to enhancing few-shot recognition abilities?
3. Can you explain more about the role of prototypes in few-shot classification schemes?
4. Are there any notable differences between MAML and other optimization-based meta-learning frameworks?
5. Which types of real-world problems benefit most significantly from applying few-shot learning methodologies?