Building a Personal Tech Stack
Resources

Deep Learning

Natural Language Processing

Reinforcement Learning

Machine Learning


Natural Language Processing Study Plan

Areas to master next: Transformer + TL -> knowledge graphs -> DRL

| Topic | cs224n | slp3 | CS11-747 | others |
| --- | --- | --- | --- | --- |
| NLP basics: math and optimizers | lecture 0 | | | |
| Word Vectors | lecture 1: Introduction and Word Vectors<br>lecture 2: Word Vectors 2 and Word Senses<br>lecture 12: Information from parts of words: Subword Models | chapter 6: Vector Semantics | Distributional Semantics and Word Vectors (1/22/2019) | ruder.io/word-embeddings |
| Neural Networks | lecture 3: Word Window Classification, Neural Networks, and Matrix Calculus<br>lecture 4: Backpropagation and Computation Graphs | chapter 7: Neural Networks and Neural LM | | Neural Networks and Deep Learning |
| RNN and Language Models | lecture 6: Recurrent Neural Networks and Language Models<br>lecture 7: Vanishing Gradients, Fancy RNNs | chapter 9: Sequence Processing with Recurrent Networks | A Simple (?) Exercise: Predicting the Next Word in a Sentence (1/17/2019)<br>Recurrent Networks for Sentence or Language Modeling (1/29/2019) | Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs<br>Understanding LSTM Networks<br>The Unreasonable Effectiveness of Recurrent Neural Networks |
| seq2seq + Attention | lecture 8: Machine Translation, Seq2Seq and Attention | chapter 22: Machine Translation | Conditioned Generation (2/5/2019)<br>Attention (2/7/2019) | https://github.com/tensorflow/nmt<br>https://arxiv.org/abs/1703.01619 |
| CNN for Text | lecture 11: ConvNets for NLP | | Convolutional Neural Nets for Text (1/24/2019) | |
| Contextual | lecture 13: contexts of use: Contextual Representations and Pretraining | | Sentence and Contextual Word Representations (2/12/2019) | http://jalammar.github.io/illustrated-bert/ |
| ✨***Transformer***✨ | Transformers and Self-Attention For Generative Models | | | https://jalammar.github.io/illustrated-transformer/<br>http://nlp.seas.harvard.edu/2018/04/03/attention.html |
| Dependency Parsing | lecture 5: Linguistic Structure: Dependency Parsing | chapter 8: Part-of-Speech Tagging | | |
| Structured Prediction Models | | | Search-based Structured Prediction (2/19/2019)<br>Reinforcement Learning (2/21/2019)<br>Structured Prediction with Local Independence Assumptions (2/26/2019) | |
| Advanced Learning Techniques | | | Latent Random Variables (3/5/2019)<br>Adversarial Methods for Text (3/7/2019)<br>Unsupervised and Semi-supervised Learning of Structure (3/28/2019) | |
| ✨Models of Knowledge and Context✨ | Reference in Language and Coreference Resolution | | Models of Dialog (4/2/2019)<br>Document-level Models (4/4/2019)<br>Learning from/for Knowledge Graphs (4/9/2019)<br>Machine Reading w/ Neural Nets (4/16/2019) | |
| Multi-task and Multilingual Learning | Multitask Learning: A general model for NLP? | | Multi-task Multi-lingual Learning Models (4/18/2019)<br>Multimodal Models (4/23/2019) | |
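The seq2seq + Attention and Transformer topics above both revolve around one core operation: scaled dot-product attention, where each query scores all keys, the scores are softmax-normalized, and the result is a weighted average of the value vectors. As a quick reference alongside those lectures, here is a minimal pure-Python sketch (function names and the toy vectors are my own, not taken from any of the linked courses):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scaled_dot_product_attention(queries, keys, values):
    """Single-head attention: each query attends over all keys and
    returns a softmax-weighted average of the value vectors."""
    d_k = len(keys[0])  # key dimension, used for the 1/sqrt(d_k) scaling
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)
        # weighted sum of value vectors, one coordinate at a time
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# toy example: one query, two key/value pairs
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
print(scaled_dot_product_attention(q, k, v))  # roughly [[1.66, 2.66]]
```

The query matches the first key more strongly, so the output sits closer to the first value vector; the Illustrated Transformer post linked above walks through the same computation with matrices.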
