A Collection of Public Code for Zero-Shot Learning

  1. 2018_CVPR_Zero-shot Recognition via Semantic Embeddings and Knowledge Graphs [code]
  2. 2018_CVPR_Learning to Compare: Relation Network for Few-Shot Learning [code]
  3. 2018_CVPR_A Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts [code]
  4. 2018_CVPR_Zero-Shot Visual Recognition Using Semantics-Preserving Adversarial Embedding Networks [code]
  5. 2018_CVPR_Feature Generating Networks for Zero-Shot Learning [code]
  6. 2018_ECCV_Learning Class Prototypes via Structure Alignment for Zero-Shot Recognition [code]
  7. 2018_ECCV_Multi-modal Cycle-consistent Generalized Zero-Shot Learning [code]
  8. 2018_ICFHR_Zero-Shot Learning Based Approach For Medieval Word Recognition Using Deep-Learned Features [code]
  9. 2018_EMNLP_Few-Shot and Zero-Shot Multi-Label Learning for Structured Label Spaces [code]
  10. 2018_arXiv_Rethinking Knowledge Graph Propagation for Zero-Shot Learning [code]
  11. 2017_CVPR_Zero-Shot Learning - The Good, the Bad and the Ugly [datasets & code]
  12. 2017_CVPR_Learning a Deep Embedding Model for Zero-Shot Learning [code]
  13. 2017_CVPR_Semantic Autoencoder for Zero-shot Learning [Pytorch code] [code]
  14. 2017_CVPR_Semantically Consistent Regularization for Zero-Shot Recognition [code]
  15. 2017_CVPR_Link the Head to the "Beak": Zero Shot Learning from Noisy Text Description at Part Precision [code]
  16. 2017_CVPR_Zero-Shot Recognition using Dual Visual-Semantic Mapping Paths [code]
  17. 2017_ICCV_Attributes2Classname: A discriminative model for attribute-based unsupervised zero-shot learning [code]
  18. 2017_ICCV_Predicting visual exemplars of unseen classes for zero-shot learning [code]
  19. 2017_ICCV_Learning Discriminative Latent Attributes for Zero-Shot Classification [code]
  20. 2017_MasterThesis_Investigating Zero-Shot Learning techniques in multi-label scenarios [code]
  21. 2017_ECML_A simple exponential family framework for zero-shot learning [code]
  22. 2017_arXiv_Structure-propagation-for-zero-shot-learning [code]
  23. 2016_CVPR_Synthesized classifiers for zero-shot learning [code]
  24. 2016_CVPR_Latent Embeddings for Zero-Shot Classification [code]
  25. 2016_CVPR_Zero-Shot Learning via Joint Latent Similarity Embedding [code]
  26. 2016_AAAI_Relation Knowledge Transfer for Zero-Shot Learning [code]
  27. 2016_ECCV_An empirical study and analysis of generalized zero-shot learning for object recognition in the wild [code]
  28. 2015_ICML_An embarrassingly simple approach to zero-shot learning [code]
  29. 2014_ECCV_Transductive Multi-view Embedding for Zero-Shot Recognition and Annotation [code]
  30. 2013_NIPS_Zero-Shot Learning Through Cross-Modal Transfer [code]
### How Large Models Learn From Open-Source Code: Methods and Training-Data Processing

Large models can improve their performance and generalization ability by learning from the code of open-source projects. This typically involves several stages of data preparation and preprocessing, followed by model fine-tuning.

#### Data Collection and Curation

To let a large model learn effectively from open-source code, the first step is to build a high-quality code corpus. Such corpora can be gathered from publicly available platforms such as GitHub and GitLab [^1]. In practice, representative projects from the target domains are chosen as the main data sources, and the selection should cover multiple programming languages and technology stacks so that the model gains broader understanding and cross-domain adaptability.

#### Preprocessing Pipeline

Once the raw code has been collected, it must be cleaned and converted into a structured form. This includes, among other things, removing irrelevant comments, normalizing variable naming, and extracting function definitions together with their docstrings [^2]. Lexers or abstract-syntax-tree (AST) tools can also be used to parse the source files further and produce representations that are easier for a neural network to model (an AST-based sketch is shown at the end of this section).

#### Fine-Tuning Strategy

Once the processed training set is ready, transfer learning can be used to adapt an existing base architecture, i.e., the fine-tuning stage. Here developers can customize the loss function to the task at hand, add regularization terms to prevent overfitting, and tune the hyperparameter combination to reach a good balance [^3] (a minimal fine-tuning sketch also follows at the end of this section).

The following Python snippet shows how such a dataset can be loaded and explored at a basic level:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the code-sample dataset
df = pd.read_csv('code_samples.csv')

# Inspect the first few records
print(df.head())

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    df['code'], df['label'], test_size=0.2, random_state=42)

# Print an overview of the split
print(f'Training samples: {len(X_train)}')
print(f'Testing samples: {len(X_test)}')
```

This script only covers the basic steps; real-world applications usually need to account for many more details to achieve good results.
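As a concrete illustration of the AST-based parsing mentioned above, the sketch below uses Python's standard `ast` module to extract function definitions and their docstrings from a single source file. The `extract_functions` helper, the `example.py` path, and the record layout are illustrative assumptions, not part of a specific toolchain.

```python
import ast

def extract_functions(path):
    """Collect (name, docstring, source) records for every function in a Python file."""
    with open(path, 'r', encoding='utf-8') as f:
        source = f.read()
    tree = ast.parse(source)
    records = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            records.append({
                'name': node.name,
                'docstring': ast.get_docstring(node) or '',
                # get_source_segment (Python 3.8+) recovers the exact source text of the node
                'source': ast.get_source_segment(source, node),
            })
    return records

if __name__ == '__main__':
    # 'example.py' is a placeholder path; point it at any local Python file.
    for rec in extract_functions('example.py'):
        first_line = rec['docstring'].splitlines()[0] if rec['docstring'] else '(no docstring)'
        print(f"{rec['name']}: {first_line}")
```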
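For the fine-tuning stage, the sketch below shows the shape of a typical training loop with the knobs discussed above: an AdamW optimizer whose `weight_decay` acts as L2 regularization, a swappable loss function, and explicit hyperparameters. The tiny MLP and synthetic tensors are stand-ins for a real pretrained code model and corpus, used only to keep the example self-contained and runnable.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-ins for code embeddings and labels (assumption for illustration).
torch.manual_seed(0)
features = torch.randn(256, 64)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(features, labels), batch_size=32, shuffle=True)

# A small MLP stands in for a pretrained code model to keep the sketch runnable.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))

# Hyperparameters mentioned in the text: learning rate, weight decay (L2 regularization), epochs.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
criterion = nn.CrossEntropyLoss()  # swap in a task-specific loss here if needed

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f'epoch {epoch}: last batch loss {loss.item():.4f}')
```

In a real setup the stand-in model would be replaced by a pretrained encoder, and the loop would also track a held-out validation split to detect the overfitting that the regularization term is meant to control.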