Original · Thoughts on computing single-point traffic flow and saturation
Saturation formula variables: sat: saturation; v: average vehicle speed; d(v): safe following distance at speed v (gap plus vehicle length, averaged); l: number of lanes; f: monitored flow (vehicles/min).
2023-11-16 11:33:24 · 891 reads
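The equation itself appears not to have survived extraction; the following is a plausible reconstruction from the variable definitions above, offered as an assumption rather than the post's verified formula:

```latex
% Per lane, one vehicle occupies d(v) of roadway, so about v / d(v)
% vehicles can pass a point per unit time; with l lanes the capacity
% is C = l * v / d(v). Saturation is measured flow over capacity:
\[
\mathrm{sat} = \frac{f}{C} = \frac{f \cdot d(v)}{l \cdot v}
\]
% (with v converted to the same per-minute units as f)
```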
Original · Paper notes: on ranking
In web search ranking, the industry has accumulated extensive experience with ranking objectives, feature selection, embedding methods, and so on. ListNet: a listwise ranking framework proposed by Microsoft years ago. Its key point is that the loss is built on the relative order of documents rather than their individual scores [i.e., for a query's result list, the relative order of the documents is what we want to preserve; whether the individual scores are 5, 4, 3, 2, 1 or 5, 4.5, 3.5, 3, 1.5 is not what we care about]. It therefore constructs a quer...
2019-10-11 10:54:44 · 249 reads
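For context, a minimal sketch of ListNet's top-one listwise loss as described in the paper; PyTorch is my choice of framework here, not the post's:

```python
# Both predicted scores and graded relevance labels are mapped to
# probability distributions with softmax, then compared with
# cross-entropy, so the loss depends on relative order rather than
# on the absolute score values.
import torch
import torch.nn.functional as F

def listnet_loss(pred_scores: torch.Tensor, true_scores: torch.Tensor) -> torch.Tensor:
    """pred_scores, true_scores: (n_docs,) scores for one query's result list."""
    true_dist = F.softmax(true_scores, dim=-1)     # target top-one probabilities
    log_pred = F.log_softmax(pred_scores, dim=-1)  # model's log-probabilities
    return -(true_dist * log_pred).sum()           # cross-entropy between the two

loss = listnet_loss(torch.tensor([2.0, 1.0, 0.5]), torch.tensor([5.0, 4.0, 3.0]))
```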
Original · Paper notes: Matching Networks for One Shot Learning
Idea: classify by distances between points. Distinctive feature: when embedding the query point x^, to make its embedding more consistent with the embeddings of the labeled samples (the support set) [and thus better suited to distance comparison], an LSTM cell with a residual by-pass selects which information to keep in the embedding vector, and an attention model over the support-set vectors then reconstructs (as a linear combination) a new embedding representation. This process iterates K times (which is t...
2019-09-29 12:41:31 · 224 reads
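A minimal sketch of the attention-based classification step, assuming cosine similarity and a one-hot label vote as in the paper; shapes and names are illustrative:

```python
# Classify the query by a softmax-attention-weighted vote over
# support-set labels: nearer support points contribute more.
import torch
import torch.nn.functional as F

def matching_predict(query_emb, support_embs, support_labels, n_classes):
    """query_emb: (d,); support_embs: (k, d); support_labels: (k,) class ids."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), support_embs, dim=-1)  # (k,)
    attn = F.softmax(sims, dim=0)                            # attention over support set
    one_hot = F.one_hot(support_labels, n_classes).float()   # (k, n_classes)
    return attn @ one_hot                                    # (n_classes,) probabilities
```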
Original · Paper notes: Prototypical Networks for Few-shot Learning
Idea: construct a prototypical representation (embedding) for each class; the prototype is one vector per class. Construction: take the embeddings of the points in the support set and average them (mean). Point embedding: regular embedding methods, GoogLeNet embedding for images, etc. LSTM + ...
2019-09-27 16:05:33 · 759 reads
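A minimal sketch of the prototype-and-distance step, assuming squared Euclidean distance as in the paper:

```python
# Prototype = mean of the support embeddings per class; the query is
# classified by a softmax over negative distances to the prototypes.
import torch

def proto_predict(query_emb, support_embs, support_labels, n_classes):
    """query_emb: (d,); support_embs: (k, d); support_labels: (k,) class ids."""
    protos = torch.stack([support_embs[support_labels == c].mean(dim=0)
                          for c in range(n_classes)])             # (n_classes, d)
    dists = ((query_emb.unsqueeze(0) - protos) ** 2).sum(dim=-1)  # squared Euclidean
    return torch.softmax(-dists, dim=0)                           # nearer => more probable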
Original · Paper notes: Few-Shot Text Classification with Induction Network
Problem addressed: intent recognition in dialogue systems; because there are many intents and relatively few samples per intent, a few-shot solution is considered. Overall architecture: 1. encode: text sequence embedding, Bi-LSTM + Attention ( softmax(M2 * tanh{ M1 * V{1:H} }))...
2019-09-27 13:01:42 · 651 reads
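A minimal sketch of that encoder step, i.e. self-attention a = softmax(M2 · tanh(M1 · H)) over Bi-LSTM hidden states; all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class BiLSTMAttnEncoder(nn.Module):
    def __init__(self, emb_dim=100, hidden=64, attn_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.M1 = nn.Linear(2 * hidden, attn_dim, bias=False)  # M1 in the formula
        self.M2 = nn.Linear(attn_dim, 1, bias=False)           # M2 in the formula

    def forward(self, x):                          # x: (batch, seq_len, emb_dim)
        H, _ = self.lstm(x)                        # hidden states V{1:H}
        scores = self.M2(torch.tanh(self.M1(H)))   # (batch, seq_len, 1)
        attn = torch.softmax(scores, dim=1)        # attention over tokens
        return (attn * H).sum(dim=1)               # (batch, 2*hidden) sentence vector
```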
Original · Paper notes: Reasoning with neural tensor networks for Knowledge Base completion
Paper notes: Reasoning with neural tensor networks for Knowledge Base completion. Paper points: neural tensor networks; entity embedding as average word embeddings; pretrained word embedding vectors used in initializ...
2019-09-24 22:36:20 · 610 reads
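For reference, a minimal sketch of the NTN scoring function from the paper, g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b); the tensor shapes are illustrative:

```python
import torch

def ntn_score(e1, e2, W, V, b, u):
    """e1, e2: (d,) entity embeddings; W: (k, d, d) bilinear tensor slices;
    V: (k, 2d) standard layer; b, u: (k,)."""
    bilinear = torch.einsum('i,kij,j->k', e1, W, e2)  # e1^T W^[1:k] e2, one scalar per slice
    linear = V @ torch.cat([e1, e2])                  # linear term on the concatenation
    return u @ torch.tanh(bilinear + linear + b)      # scalar plausibility of (e1, R, e2)
```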
Original · Paper notes: IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
This paper applies the GAN idea to IR problems. Traditional IR modeling paths: Generative: generate document content from the query content, chiefly capturing content-similarity information. A firm grasp of textual content is therefore critical, and for content-related problems a range of supervised and unsupervised content-similarity measures have emerged, including TF, IDF, BM25, LDA clustering, boolean, and BoW. Discriminative: driven by relevance needs, it goes beyond query-document content similarity and broadly introduces signals such as user click feedback and page links, i.e. non-content...
2019-09-10 13:02:40 · 841 reads
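IRGAN's minimax objective, reconstructed from the paper since the listing truncates before it appears:

```latex
% The generator p_\theta picks documents to fool the discriminator,
% while D_\phi learns to separate truly relevant documents from
% generated ones:
\[
J = \min_\theta \max_\phi \sum_{n=1}^{N}
  \Big( \mathbb{E}_{d \sim p_{\mathrm{true}}(d \mid q_n)}\big[\log D_\phi(d \mid q_n)\big]
      + \mathbb{E}_{d \sim p_\theta(d \mid q_n)}\big[\log\big(1 - D_\phi(d \mid q_n)\big)\big] \Big)
\]
```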
Original · Paper notes: A Survey on Dialogue Systems: Recent Advances and New Frontiers
A survey of dialogue systems. Classification: task-oriented systems and non-task-oriented systems. Overall architecture (pipeline) of a task-oriented dialogue system: NLU -> Dialogue state tracking -> Policy learning -> NLG. NLU: map natural language into seman...
2019-09-07 12:22:58 · 318 reads
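A toy skeleton of that four-stage pipeline; every component below is a deliberately trivial placeholder to show the data flow, not anything from the survey:

```python
def nlu(text):                          # NLU: text -> (intent, semantic slots)
    return ("book_flight", {"dest": "Paris"}) if "Paris" in text else ("chitchat", {})

def track_state(state, intent, slots):  # DST: fold the new frame into the dialogue state
    state.update(slots)
    state["intent"] = intent
    return state

def policy(state):                      # Policy learning: state -> next system action
    return "confirm_dest" if "dest" in state else "ask_dest"

def nlg(action, state):                 # NLG: system action -> natural language
    return f"Fly to {state['dest']}?" if action == "confirm_dest" else "Where to?"

state = {}
intent, slots = nlu("I want to go to Paris")
state = track_state(state, intent, slots)
print(nlg(policy(state), state))        # -> Fly to Paris?
```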
Original · Paper notes: LINE: Large-scale Information Network Embedding
Graph embedding. Idea: first-order proximity: directly connected pairs of nodes share similarity, which holds for some social-network-like applications. Second-order proximity: nodes with overlapping ...
2019-09-05 11:19:58 · 276 reads
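LINE's first-order proximity model, reconstructed from the paper for reference:

```latex
% The joint probability of an edge (v_i, v_j) is a sigmoid of the dot
% product of the two node embeddings; training matches it to the
% empirical (weight-normalized) edge distribution.
\[
p_1(v_i, v_j) = \frac{1}{1 + \exp\left(-\vec{u}_i^{\,\top}\vec{u}_j\right)}
\]
```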
Original · Paper notes: BI-DIRECTIONAL ATTENTION FLOW FOR MACHINE COMPREHENSION
BiDAF. Char-CNN: convolutional neural networks for word embedding within a sentence, which slide over local embeddings to generate the next-stage embedding; multiple filters can be used to represent differen...
2019-08-30 18:54:49 · 205 reads
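A minimal sketch of such a character-level CNN embedding layer (a 1-D convolution over local embeddings, max-pooled per filter); the sizes are illustrative assumptions, not BiDAF's actual hyperparameters:

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, n_chars=100, char_dim=16, n_filters=100, kernel=5):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel, padding=kernel // 2)

    def forward(self, char_ids):                # (batch, word_len) character indices
        x = self.emb(char_ids).transpose(1, 2)  # (batch, char_dim, word_len)
        x = torch.relu(self.conv(x))            # each filter scans local windows
        return x.max(dim=2).values              # (batch, n_filters) one vector per word
```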
Original · Paper notes: DeepWalk: Online Learning of Social Representations
Graph embedding: random walk + word2vec (skip-gram). Random walks generate (sample) node sequences as sequential samples for node embedding training, which avoids the sparsity problem of the sequence sample set. Word2...
2019-08-30 15:03:47 · 124 reads
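A minimal sketch of the two stages, using gensim's Word2Vec as the skip-gram trainer; the tooling choice is an assumption, not the paper's original code:

```python
import random
from gensim.models import Word2Vec

def random_walk(graph, start, length):
    """graph: dict node -> neighbor list. One truncated random walk as str tokens."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = graph[walk[-1]]
        if not nbrs:
            break
        walk.append(random.choice(nbrs))
    return [str(n) for n in walk]

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = [random_walk(graph, n, 10) for n in graph for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=4, sg=1, min_count=0)  # sg=1: skip-gram
node0_vec = model.wv["0"]  # learned embedding for node 0
```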
Original · Paper notes: Translation-based Recommendation
Inspired by graph embedding, which models a transition (edge) embedding as a vector. Basic embedding relation: i and j are the previous- and next-item embeddings, u is the user embedding. In order to solve co...
2019-08-30 12:11:34 · 984 reads
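The basic relation itself appears to have been an image; reconstructed here from the variable descriptions in TransE style, as my reading of the paper rather than the post's verified equation:

```latex
% The user behaves as a translation vector from the previous item to
% the next item in the interaction sequence:
\[
\gamma_i + t_u \approx \gamma_j
\]
% where \gamma_i, \gamma_j are the previous/next item embeddings and
% t_u is the user's translation embedding.
```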
Original · Paper notes: Towards Knowledge-Based Recommender Dialog System
An end-to-end, knowledge-based recommendation dialog system. Recommendation systems: CF (collaborative filtering), neural networks, etc. In this paper, item embeddings and user embeddings are used. Item embe...
2019-08-27 13:12:20 · 967 reads
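A generic sketch of embedding-based item scoring of the kind referenced above (a plain user/item dot product, not this paper's exact model):

```python
import torch

n_users, n_items, dim = 100, 500, 32
user_emb = torch.nn.Embedding(n_users, dim)   # one vector per user
item_emb = torch.nn.Embedding(n_items, dim)   # one vector per item

def score_items(user_id: int) -> torch.Tensor:
    """Relevance of every item to one user, by embedding dot product."""
    u = user_emb(torch.tensor(user_id))       # (dim,)
    return item_emb.weight @ u                # (n_items,)

top5 = score_items(7).topk(5).indices         # indices of the 5 best items
```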
Original · Paper notes: Pointer Networks
Pointer networks: an attention model applied to seq2seq problems with unbounded output vocabularies, where output tokens are selected from the input. It uses the attention scores themselves for pointer selection, making OOV (out of v...
2019-08-26 16:06:29 · 182 reads
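A minimal sketch of one pointer step, using the additive attention form from the paper with illustrative sizes; the attention distribution over input positions is itself the output distribution:

```python
import torch
import torch.nn as nn

class PointerStep(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.W1 = nn.Linear(hidden, hidden, bias=False)  # encoder-state projection
        self.W2 = nn.Linear(hidden, hidden, bias=False)  # decoder-state projection
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (seq_len, hidden); dec_state: (hidden,)
        u = self.v(torch.tanh(self.W1(enc_states) + self.W2(dec_state)))  # (seq_len, 1)
        return torch.softmax(u.squeeze(-1), dim=0)  # distribution over input positions
```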
Original · Paper notes: A Decomposable Attention Model for Natural Language Inference
A compact, effective textual entailment model. Strengths: small model, good results, clear line of reasoning. Method: use attention models crosswise to mutually embed each sentence's information into the other (AM); use self-attention (as in BERT and the transformer) + relative-position embeddings (as in XLNet) for contextual embedding; take the self-embedded and cross-embedded vectors and co...
2019-08-22 15:11:53 · 265 reads
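A minimal sketch of the cross-attention ("mutual embedding") step as I read the paper; shapes are illustrative:

```python
import torch

def cross_embed(a, b):
    """a: (la, d) premise token vectors; b: (lb, d) hypothesis token vectors."""
    e = a @ b.T                            # (la, lb) unnormalized alignment scores
    beta = torch.softmax(e, dim=1) @ b     # (la, d) b-content aligned to each a token
    alpha = torch.softmax(e, dim=0).T @ a  # (lb, d) a-content aligned to each b token
    return beta, alpha                     # compared with a and b in the next stage
```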
Original · Paper notes: Visualizing and Understanding the Effectiveness of BERT
A comparison of BERT pre-training plus fine-tuning against training from scratch. Conclusions: pre-training gets a good initial point across downstream tasks; pre-training leads to wider optima; pre-training eases optimization on down...
2019-08-22 11:45:42 · 329 reads
Original · Paper notes: visualizing NLP representations
Reference papers: Are All Layers Created Equal; Visualizing and Understanding the Effectiveness of BERT; Visualizing and Measuring the Geometry of BERT. Are All Layers Created Equal: for various DNN architectures, the per-layer parameters are re-initializ...
2019-08-22 11:12:23 · 649 reads
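A minimal sketch of that re-initialization probe, assuming a trained PyTorch model and a user-supplied accuracy callback (both hypothetical here):

```python
import copy

def reinit_probe(model, layer_name, eval_fn):
    """Reset one named layer of a trained model to fresh random weights
    and re-evaluate; eval_fn(model) -> accuracy. A small accuracy drop
    suggests the layer's learned weights matter little; a large drop
    marks a critical layer."""
    probe = copy.deepcopy(model)                    # keep the original intact
    for name, module in probe.named_modules():
        if name == layer_name and hasattr(module, "reset_parameters"):
            module.reset_parameters()               # back to the init distribution
    return eval_fn(probe)
```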
Original · Paper notes: Learning Natural Language Inference with LSTM
Learning Natural Language Inference with LSTM.pdf. Paper idea; innovations. Paper idea: targeting text entailment, the paper uses an AM (attention model) to embed the content of the premise P into the hypothesis H, encodes H with an LSTM, and predicts the resulting class (entailment, contradiction, neutral)....
2019-08-22 10:08:28 · 587 reads
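A minimal sketch of one step of that attention-plus-LSTM (match-LSTM-style) encoding; the dot-product attention is a simplification of the paper's parameterized attention, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class MatchStep(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.cell = nn.LSTMCell(2 * hidden, hidden)

    def forward(self, premise_states, h_k, state):
        # premise_states: (lp, hidden); h_k: (hidden,) current hypothesis state
        attn = torch.softmax(premise_states @ h_k, dim=0)  # (lp,) alignment to premise
        a_k = attn @ premise_states                        # attended premise summary
        m_k = torch.cat([a_k, h_k]).unsqueeze(0)           # (1, 2*hidden) match input
        return self.cell(m_k, state)                       # next (h, c) of the match-LSTM
```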