Paper Reading Notes 0906

Domain adaptation

[1] Cross-Domain Contrastive Learning for Time Series Clustering

(1) Clustering and feature extraction are fused by imposing contrastive constraints at both the cluster level and the instance level.
(2) Features are extracted in both the time domain and the frequency domain, and contrastive learning is used to strengthen the within-domain feature representations (see the sketch below).
(3) A cross-domain alignment constraint is used to refine the feature representations and the cluster assignments of the samples.
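
A minimal sketch of the instance-level part of this idea, assuming a standard NT-Xent formulation in which the time-domain and frequency-domain embeddings of the same series form the positive pair; the function name, temperature, and batch-only negatives are my illustrative assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z_time, z_freq, temperature=0.5):
    """z_time, z_freq: (B, D) embeddings of the same B series from the two domains."""
    z_time = F.normalize(z_time, dim=1)
    z_freq = F.normalize(z_freq, dim=1)
    z = torch.cat([z_time, z_freq], dim=0)                 # (2B, D) joint batch
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    b = z_time.size(0)
    mask = torch.eye(2 * b, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))             # drop self-similarities
    # positives: the i-th time view matches the i-th frequency view and vice versa
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

A common way to add the cluster-level counterpart is to apply the same contrast to the columns of the soft cluster-assignment matrix, though the paper's exact form may differ.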

[2] Open-Set Graph Domain Adaptation via Separate Domain Alignment

Problem addressed: open-set graph domain adaptation, where target graphs may contain nodes of unknown classes.
Two novel contributions for graph domain adaptation:

  1. target domain separation, and
  2. neighbor center clustering.
    The first part dynamically splits target nodes into certain and uncertain groups by their entropy values (see the sketch below), while the second part refines the coarsely divided unknown group by pushing these nodes toward their neighbor centers.
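
A minimal sketch of the entropy-based target separation step: target nodes whose normalized prediction entropy falls below a threshold are treated as "certain" (known classes), the rest as "uncertain" candidates for the unknown class. The fixed threshold is an illustrative assumption; the paper splits the groups dynamically.

```python
import math
import torch

def split_by_entropy(logits, threshold=0.5):
    """logits: (N, C) classifier outputs for the target nodes."""
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    entropy = entropy / math.log(logits.size(1))   # normalize to [0, 1]
    certain_mask = entropy < threshold             # confident -> known-class group
    return certain_mask, ~certain_mask             # uncertain -> unknown-class candidates
```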

[3] Pushing the Limit of Fine-Tuning for Few-Shot Learning: Where Feature Reusing Meets Cross-Scale Attention

Problem setting: federated domain generalization with data privacy protection.
(1) To avoid domain-specific information being learned by local models, we propose intra-domain gradient matching, which minimizes the gradient discrepancy between original images and augmented images for learning the intrinsic semantic information (see the sketch below).
(2) To reduce the domain shift across decentralized source domains, we propose inter-domain gradient matching, which minimizes the gradient discrepancy between the current model and the models from other domains.
(3) By combining intra-domain and inter-domain gradient matching, the domain shift can be reduced both within isolated source domains and across decentralized source domains, so that the model generalizes well to the unseen target domain.
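
A minimal sketch of intra-domain gradient matching as I read it: the penalty is the cosine distance between the gradients computed on the original batch and on its augmented version. The model/loss handles and the cosine-distance choice are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def intra_domain_gradient_matching(model, loss_fn, x, x_aug, y):
    """Penalty encouraging the same parameter update for original and augmented images."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_orig = torch.autograd.grad(loss_fn(model(x), y), params, create_graph=True)
    g_aug = torch.autograd.grad(loss_fn(model(x_aug), y), params, create_graph=True)
    g_orig = torch.cat([g.reshape(-1) for g in g_orig])
    g_aug = torch.cat([g.reshape(-1) for g in g_aug])
    return 1.0 - F.cosine_similarity(g_orig, g_aug, dim=0)   # 0 when gradients align
```

The inter-domain version would compare gradients (or model updates) across the decentralized source domains in the same spirit.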

Feature selection/fusion

[1] Pushing the Limit of Fine-Tuning for Few-Shot Learning: Where Feature Reusing Meets Cross-Scale Attention

  1. a hybrid design named Intra-Block Fusion (IBF) to strengthen the extracted features within each convolution block;
  2. a novel Cross-Scale Attention (CSA) module to mitigate the scaling inconsistencies arising from the limited training samples, especially for cross-domain tasks (see the sketch below).
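
One plausible reading of cross-scale attention, sketched minimally: tokens from a coarse scale query tokens from a fine scale, and the result is fused back residually. The single-head design, layer names, and shapes are my assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class CrossScaleAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # queries from the coarse scale
        self.k = nn.Linear(dim, dim)   # keys from the fine scale
        self.v = nn.Linear(dim, dim)   # values from the fine scale

    def forward(self, coarse, fine):
        """coarse: (B, N_c, D) tokens; fine: (B, N_f, D) tokens."""
        q, k, v = self.q(coarse), self.k(fine), self.v(fine)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        return coarse + attn @ v       # residual fusion of fine-scale context
```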

[2] Pushing the Limit of Fine-Tuning for Few-Shot Learning: Where Feature Reusing Meets Cross-Scale Attention

(1) Multifold feature extractors are constructed to extract different levels of shallow features from the input MR image.
(2) MRI-transformer modules take features of different levels as inputs to extract transfer features, which are passed to the MFRN as supplementary features.

[3] ResMatch: Residual Attention Learning for Feature Matching

(1) We propose residual attention learning for feature matching, termed ResMatch. A simple bypass injection of relative positions and descriptor similarities facilitates the learning of feature matching (see the sketch below).
(2) sResMatch, a variant with kNN-based linear attention, is also proposed.
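
A minimal sketch of the "residual" flavor of the attention, under my assumption that the raw descriptor similarity is injected as an additive bias on the learned attention logits, so the network only learns a residual on top of a measurable similarity; relative positions could be injected the same way.

```python
import torch
import torch.nn.functional as F

def residual_cross_attention(q, k, v, desc_a, desc_b):
    """q: (N_a, D), k/v: (N_b, D) projected features; desc_a/desc_b: raw descriptors."""
    logits = q @ k.t() / q.size(-1) ** 0.5                       # learned attention scores
    sim = F.normalize(desc_a, dim=1) @ F.normalize(desc_b, dim=1).t()
    attn = torch.softmax(logits + sim, dim=-1)                   # bypass injection of similarity
    return attn @ v
```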

Contrastive Learning

[1] Pushing the Limit of Fine-Tuning for Few-Shot Learning: Where Feature Reusing Meets Cross-Scale Attention

Problem setting: to tackle the challenge of long-tailed recognition, this paper analyzes two issues in supervised contrastive learning and addresses them with DSCL and PBSD.
(1) DSCL decouples the two types of positives in SCL and optimizes their relations toward different objectives to alleviate the influence of the imbalanced dataset.
(2) PBSD leverages head classes to facilitate representation learning for tail classes by exploiting patch-level similarity relationships.

[2]DCLP: Neural Architecture Predictor with Curriculum Contrastive Learning

(1) Contrastive learning is utilized in the neural predictor to leverage unlabeled data, which reduces the predictor's requirement for labeled training data and improves its generalization ability.
(2) A novel curriculum method is proposed to guide the contrastive task, which makes the predictor converge faster and perform better in NAS (see the sketch below).
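
A minimal sketch of one possible curriculum schedule: training samples are sorted by a difficulty score and the training pool grows from the easiest fraction to the full set over the epochs. The linear pacing function and the starting fraction are illustrative assumptions, not necessarily the paper's scheduler.

```python
def curriculum_subset(indices_by_difficulty, epoch, total_epochs, start_frac=0.3):
    """indices_by_difficulty: sample indices sorted from easiest to hardest."""
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(1, total_epochs - 1))
    n = max(1, int(len(indices_by_difficulty) * frac))
    return indices_by_difficulty[:n]   # train on the easiest n samples this epoch
```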

[3] A learnable discrete-prior fusion autoencoder with contrastive learning for tabular data analysis

(1) We propose to exploit the transformer attention mechanism for tabular feature semantic fusion; it fuses unimodal and multimodal features to generate latent representations in the encoder.
(2) We introduce a contrastive learning strategy that pulls similar discrete feature distributions closer while pushing dissimilar ones farther apart, imposing a dynamic constraint on the representativeness of the latent embeddings.

[4] TopoGCL: Topological Graph Contrastive Learning

(1) We propose a new contrastive mode that targets topological representations of the two augmented views of the same graph, obtained by extracting latent shape properties of the graph at multiple resolutions.
(2) We introduce a new extended persistence summary, namely extended persistence landscapes (EPL), and derive its theoretical stability guarantees.

Incremental/continual learning

[1] Multi-Domain Incremental Learning for Face Presentation Attack Detection

Problem setting: in real-world scenarios, the original training data of the pre-trained model is not available due to data privacy or other reasons.
(1) An instance-wise router module is designed to select the relevant expert by obtaining the associated domain index based on the similarity to the domain centers (see the sketch below).
(2) The obtained index guides the instance into the Domain-specific Experts block with the corresponding expert branch via a gating mechanism.
(3) An asymmetric classifier is designed to address the inconsistency of the PPD of live samples across newly appended domains.
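
A minimal sketch of the instance-wise routing step: each instance gets the domain index whose stored center is most similar to its feature, and that index gates the corresponding expert branch. The cosine similarity and the hard argmax are my illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def route_to_expert(features, domain_centers):
    """features: (B, D) instance features; domain_centers: (K, D) centers of K domains."""
    sim = F.normalize(features, dim=1) @ F.normalize(domain_centers, dim=1).t()  # (B, K)
    return sim.argmax(dim=1)   # domain/expert index per instance

# usage (hypothetical experts list): idx = route_to_expert(f, centers); y_i = experts[idx[i]](f[i:i+1])
```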

[2] Multi-Domain Incremental Learning for Face Presentation Attack Detection

(1) We formulate a continuous prompt function as a neural bottleneck and encode the collection of prompts into network weights.
(2) We establish a paired prompt memory system consisting of a stable reference prompt memory and a flexible working prompt memory.
(3) We progressively fuse the working prompt memory and the reference prompt memory during inter-task periods, resulting in a continually evolving prompt memory (see the sketch below).

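
A minimal sketch of the inter-task fusion, assuming an EMA-style update of the stable reference memory toward the working memory; the momentum value and the EMA form itself are illustrative assumptions, not necessarily the paper's fusion rule.

```python
import torch

@torch.no_grad()
def fuse_prompt_memories(reference, working, momentum=0.9):
    """reference, working: (num_prompts, prompt_len, dim) prompt parameter tensors."""
    reference.mul_(momentum).add_(working, alpha=1.0 - momentum)   # progressive fusion
    return reference
```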

[3] Non-exemplar Online Class-Incremental Continual Learning via Dual-Prototype Self-Augment and Refinement

  1. Dual class prototypes: vanilla and high-dimensional prototypes are exploited to utilize the pre-trained information and obtain robust quasi-orthogonal representations, instead of exemplar buffers, for both privacy preservation and memory reduction.
  2. Self-augment and refinement: instead of updating the whole network, the high-dimensional prototypes are optimized alternately with an extra projection module based on self-augmented vanilla prototypes, through a bi-level optimization problem.

[4] Cross-Class Feature Augmentation for Class Incremental Learning

(1) The proposed approach takes a unique perspective on utilizing previous knowledge in class incremental learning: it augments features of arbitrary target classes using examples from other classes via adversarial attacks on a previously learned classifier (see the sketch below).
(2) With Cross-Class Feature Augmentation (CCFA), each class in the old tasks conveniently populates samples in the feature space, which alleviates the collapse of decision boundaries caused by sample deficiency for the previous tasks.
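
A minimal sketch of the augmentation step: a feature from another class is pushed, by gradient steps against the previously learned classifier, until that classifier assigns it to the desired old class. Step size and iteration count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def augment_feature(old_classifier, feat, target_class, steps=10, lr=0.1):
    """feat: (B, D) features from other classes; target_class: (B,) old-class labels to mimic."""
    feat = feat.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(old_classifier(feat), target_class)
        grad, = torch.autograd.grad(loss, feat)
        feat = (feat - lr * grad).detach().requires_grad_(True)   # move toward the target class
    return feat.detach()
```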

[5] Summarizing Stream Data for Memory-Constrained Online Continual Learning

(1) The training gradients of real and summarized samples on the same network are matched, so that training on the summarized samples yields parameter updates similar to those of the original images.
(2) Samples of previous tasks stored in the memory are employed to help fit the overall distribution and provide more appropriate gradient supervision.
(3) The consistency of the relationships to previous samples between real and summarized samples also serves as a constraint for establishing a better distribution in the memory.

[6] Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation

(1) A replay buffer selection module is proposed to store high-quality hard negative samples for representation learning.
(2) A Prototype-instance Relation Distillation (PRD) loss is designed to maintain the relationship between prototypes and sample representations through a self-distillation process (see the sketch below).
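
A minimal sketch of a prototype-instance relation distillation loss: the similarity distribution between a sample and the prototypes under the current model is pushed toward the distribution under the previous (frozen) model. The temperature and the KL form are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prd_loss(z_new, z_old, protos_new, protos_old, tau=0.1):
    """z_*: (B, D) embeddings; protos_*: (K, D) prototypes from the current/previous model."""
    p_new = F.log_softmax(F.normalize(z_new, dim=1) @ F.normalize(protos_new, dim=1).t() / tau, dim=1)
    p_old = F.softmax(F.normalize(z_old, dim=1) @ F.normalize(protos_old, dim=1).t() / tau, dim=1)
    return F.kl_div(p_new, p_old, reduction='batchmean')
```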

[7] Towards Continual Knowledge Graph Embedding via Incremental Distillation

(1) A hierarchical strategy that ranks new triples for layer-by-layer learning is proposed to optimize the learning order.
(2) A novel incremental distillation mechanism is designed to facilitate the seamless transfer of entity representations from the previous layer to the next, promoting the preservation of old knowledge.
(3) A two-stage training paradigm is developed to avoid over-corruption of the old knowledge caused by under-trained new knowledge.

[8] Semi-Supervised Blind Image Quality Assessment through Knowledge Distillation and Incremental Learning

Problem addressed: the model's heavy demand for labeled data.
(1) Knowledge distillation is utilized to assign pseudo-labels to unlabeled data, preserving the analytical capability of the model (see the sketch below).
(2) Experience replay is employed to alleviate the forgetting issue.
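
A minimal sketch of the pseudo-labeling step: a frozen teacher predicts quality scores for unlabeled images and the student is trained on labeled data plus these pseudo-labels. The L1 losses and the weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(student, teacher, x_lab, y_lab, x_unlab, alpha=0.5):
    with torch.no_grad():
        pseudo = teacher(x_unlab)                     # teacher's quality scores as pseudo-labels
    loss_lab = F.l1_loss(student(x_lab), y_lab)       # supervised regression loss
    loss_unlab = F.l1_loss(student(x_unlab), pseudo)  # distillation on unlabeled data
    return loss_lab + alpha * loss_unlab
```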

Time series data

[1] MSGNet: Learning Multi-Scale Inter-Series Correlations for Multivariate Time Series Forecasting

(1) Frequency-domain analysis and adaptive graph convolution are utilized to learn the varying inter-series correlations across multiple time scales (see the sketch below).
(2) The model incorporates a self-attention mechanism to capture intra-series dependencies, while introducing an adaptive mixhop graph convolution layer to autonomously learn diverse inter-series correlations within each time scale.
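
A minimal sketch of the frequency-domain scale selection: the top-k dominant periods of the input series are found with the FFT, and each period defines one time scale for the subsequent adaptive graph convolutions. Averaging the spectrum over batch and channels, and picking top-k by amplitude, are assumptions in the spirit of the paper.

```python
import torch

def dominant_periods(x, k=3):
    """x: (B, T, C) multivariate series; returns k candidate periods (time scales)."""
    amp = torch.fft.rfft(x, dim=1).abs().mean(dim=(0, 2))   # averaged amplitude spectrum
    amp[0] = 0.0                                            # ignore the DC component
    top_freqs = torch.topk(amp, k).indices                  # dominant frequencies
    return (x.size(1) // top_freqs.clamp_min(1)).tolist()   # convert frequencies to periods
```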
