
Graduation Thesis (paper reading notes)
Zero-Shot Multilingual Sentiment Analysis using Hierarchical Attentive Network and BERT
Abstract: Sentiment analysis is considered an important downstream task in language modelling. We propose a Hierarchical Attentive Network using BERT for document sentiment classification. We further show that importing representations from a Multiplicative LS…
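The snippet only names the architecture, so as a rough illustration: a hierarchical attentive model of this kind can be built by encoding each sentence with BERT and pooling the sentence vectors with a learned attention layer before classification. The PyTorch module below is a minimal sketch under that assumption; the checkpoint name, pooling choice, and attention formulation are placeholders, not the paper's released code.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class HierarchicalAttentiveClassifier(nn.Module):
    """Sketch: BERT encodes each sentence; attention pools sentence vectors into a document vector."""
    def __init__(self, num_classes=2, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.attn = nn.Linear(hidden, 1)            # one attention score per sentence
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # input_ids: (num_sentences, seq_len) -- one document split into sentences
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        sent_vecs = out.last_hidden_state[:, 0]      # [CLS] vector of each sentence
        weights = torch.softmax(self.attn(sent_vecs), dim=0)
        doc_vec = (weights * sent_vecs).sum(dim=0)   # attention-weighted document vector
        return self.classifier(doc_vec)
```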
Senti-BSAS: A BERT-based Classification Model with Sentiment Calculating for Happiness Research
Abstract: Happiness has recently become a rising topic that we all care about. It can be described in various forms. For text content, it is an interesting subject to research happiness by utilizing natural language processing (NLP) methods.
Chinese Sentiment Classification Model based on Pre-Trained BERT
Abstract: In order to solve the problems of low accuracy, limited training data, and poor training results of traditional machine learning algorithms in the Chinese sentiment classification task, this paper proposes a Chinese sentiment classification model based on p…
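The abstract only states that the model builds on a pre-trained BERT. A common way to realize such a Chinese sentiment classifier is to fine-tune `bert-base-chinese` with a sequence-classification head; the sketch below shows that setup with the HuggingFace `transformers` API. The checkpoint name, label set, and toy examples are assumptions, not the paper's actual configuration.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Assumed checkpoint; the paper may use a different pre-trained Chinese BERT.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=2)

texts = ["这部电影太精彩了", "服务态度很差"]      # toy positive / negative examples
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)         # returns loss and logits
outputs.loss.backward()                          # one gradient step of fine-tuning
print(outputs.logits.argmax(dim=-1))             # predicted sentiment labels
```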
Attention Is All You Need
Abstract: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism.
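The core operation behind the attention mechanism this paper is built on is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal PyTorch sketch of that formula; the tensor shapes in the toy example are illustrative only.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # similarity of queries to keys
    weights = torch.softmax(scores, dim=-1)            # attention distribution per query
    return weights @ v                                  # weighted sum of value vectors

# Toy shapes: 1 batch, 4 query positions, 6 key/value positions, dimension 8.
q = torch.randn(1, 4, 8)
k = torch.randn(1, 6, 8)
v = torch.randn(1, 6, 8)
print(scaled_dot_product_attention(q, k, v).shape)      # torch.Size([1, 4, 8])
```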
Pre-Training with Whole Word Masking for Chinese BERT
Abstract: Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks. Recently, an upgraded version of BERT has been released with Whole Word Masking (WWM), which mitigates the drawbacks of maskin…
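To make the WWM idea concrete: when a word is split into several sub-tokens (or, for Chinese, several characters forming one segmented word), whole word masking replaces all of its pieces together instead of masking pieces independently. The toy function below only illustrates that selection rule; the segmentation, masking rate, and mask token are simplified assumptions, not the released pre-training code.

```python
import random

def whole_word_mask(words, mask_rate=0.15, mask_token="[MASK]"):
    """Mask every sub-token of a selected word together (toy illustration of WWM)."""
    masked = []
    for pieces in words:                      # each word = list of its sub-tokens
        if random.random() < mask_rate:
            masked.extend([mask_token] * len(pieces))  # mask the whole word
        else:
            masked.extend(pieces)
    return masked

# "使用语言模型" segmented into words, each word given as its character tokens.
words = [["使", "用"], ["语", "言"], ["模", "型"]]
random.seed(0)
print(whole_word_mask(words, mask_rate=0.5))
# A selected word has all of its pieces masked, e.g. ['使', '用', '语', '言', '[MASK]', '[MASK]']
```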
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Abstract: BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fi…
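The "jointly conditioning on both left and right context" described here is what the masked language modelling objective provides: a masked position is predicted from the tokens on both sides of it. The sketch below shows that behaviour with the HuggingFace `transformers` masked-LM head; the checkpoint and example sentence are illustrative only.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# The masked word is predicted from context on BOTH sides of [MASK].
text = "The movie was [MASK] and the audience loved it."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))    # a plausible fill such as "great"
```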