The differences between multi-class, multi-label, and multi-task

This article explains the differences and connections between multiclass classification, multilabel classification, and multi-task classification, illustrating each concept with concrete examples.


Reposted from: https://blog.youkuaiyun.com/golden1314521/article/details/51251252

I had long been unclear about the differences and connections among multi-class, multi-label, and multi-task classification; I recently found the following explanation (quoted from the scikit-learn documentation):

  • Multiclass classification means a classification task with more than two classes; e.g., classify a set of images of fruits which may be oranges, apples, or pears. Multiclass classification makes the assumption that each sample is assigned to one and only one label: a fruit can be either an apple or a pear but not both at the same time.

  • Multilabel classification assigns to each sample a set of target labels. This can be thought of as predicting properties of a data-point that are not mutually exclusive, such as topics that are relevant for a document. A text might be about any of religion, politics, finance or education at the same time or none of these.

  • Multioutput-multiclass classification and multi-task classification mean that a single estimator has to handle several joint classification tasks. This is a generalization of the multi-label classification task, where the set of classification problems is restricted to binary classification, and of the multi-class classification task. The output format is a 2d numpy array or sparse matrix.

    The set of labels can be different for each output variable. For instance, a sample could be assigned “pear” for an output variable that takes possible values in a finite set of species such as “pear”, “apple”, “orange”; and “green” for a second output variable that takes possible values in a finite set of colors such as “green”, “red”, “orange”, “yellow”…

    This means that any classifier handling multi-output multiclass or multi-task classification tasks supports the multi-label classification task as a special case. Multi-task classification is similar to the multi-output classification task, but with different model formulations. For more information, see the relevant estimator documentation.
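To make the three output formats above concrete, here is a minimal runnable sketch using scikit-learn (whose documentation is quoted above). The toy data and the choice of LogisticRegression / MultiOutputClassifier are illustrative assumptions, not something prescribed by the quoted text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.RandomState(0)
X = rng.rand(6, 4)  # 6 samples, 4 features (toy data)

# Multiclass: exactly one label per sample, drawn from more than two classes.
y_multiclass = np.array(["orange", "apple", "pear", "apple", "orange", "pear"])
print(LogisticRegression().fit(X, y_multiclass).predict(X))  # shape (6,)

# Multilabel: a set of binary labels per sample, as a 2d indicator array.
y_multilabel = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0],
                         [0, 0, 1], [1, 0, 0], [0, 1, 0]])
print(MultiOutputClassifier(LogisticRegression())
      .fit(X, y_multilabel).predict(X))  # shape (6, 3)

# Multioutput-multiclass: each output column has its own label set
# (species and color, as in the quoted example).
y_multioutput = np.array([["pear", "green"], ["apple", "red"], ["orange", "orange"],
                          ["pear", "yellow"], ["apple", "green"], ["orange", "red"]])
print(MultiOutputClassifier(LogisticRegression())
      .fit(X, y_multioutput).predict(X))  # shape (6, 2)
```

Note that the multilabel case is exactly the multioutput case with every column restricted to {0, 1}, which matches the "special case" remark in the quote.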

From the quoted explanation we can see:

  • Multiclass classification is the ordinary multi-class problem. For example, in age prediction, people are divided into four categories: children, young people, middle-aged people, and the elderly. Multiclass classification stands in contrast to binary classification: gender prediction, which takes only the two values male and female, belongs to the latter.
  • Multilabel classification is multi-label classification. For example, news article A may be related to {politics, sports, nature}, so it is tagged with all three labels, while news article B may be related only to {sports, nature} and therefore gets only those two labels.
  • Multioutput-multiclass classification and multi-task classification refer to the same thing. Continuing the news example, define a three-element vector whose first, second, and third elements indicate (each taking the value 1 or 0) whether the article is related to politics, sports, and nature, respectively. Article A is then represented as [1,1,1] and article B as [0,1,1], and this can be viewed as a multi-task classification problem. The example also shows that multilabel classification is a special case of multi-task classification. It is special because, in the general case, each element of the output vector may take more than two values; for instance, predicting age and gender at the same time, where age has four possible values and gender has two (see the sketch below).
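Here is a minimal sketch of the encoding just described, in NumPy; apart from the [1,1,1] and [0,1,1] vectors from the text, the label names and arrays are illustrative assumptions.

```python
import numpy as np

# Multi-label: each task (column) is binary -- politics, sports, nature.
article_A = np.array([1, 1, 1])  # related to politics, sports, and nature
article_B = np.array([0, 1, 1])  # related only to sports and nature
Y_multilabel = np.vstack([article_A, article_B])  # shape (2, 3), entries in {0, 1}

# General multi-task: columns may take more than two values, e.g. an age
# group with four classes and a gender with two classes predicted jointly.
Y_multitask = np.array([["child",   "male"],
                        ["elderly", "female"]])  # shape (2, 2)
```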

