Part-of-Speech Tagging with stanford-postagger-full

This article introduces several of the pre-trained models shipped with the Stanford POS Tagger, such as arabic.tagger and chinese-distsim.tagger, describing their training data and performance. It also explains how to use the tagger from the command line and through the API, including when to use the different options and concrete command examples.


1 Introduction to the models folder

This release of the tagger includes a models folder containing two kinds of files: .tagger files and .props files. The .tagger files are the trained part-of-speech tagging models, and the .props files are the corresponding properties files used to train them. All files in the models folder are shown in the figure below:


The main models in this folder are introduced below; first, the short sketch that follows shows how a .tagger model file is typically loaded and used.
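This is only a minimal, hedged sketch: the model file name and the sample sentence are placeholders, and the class edu.stanford.nlp.tagger.maxent.MaxentTagger is the tagger API shipped in the distribution's jar.

import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class LoadModelDemo {
    public static void main(String[] args) {
        // Load a trained model (.tagger file) from the models folder.
        MaxentTagger tagger = new MaxentTagger("models/english-left3words-distsim.tagger");

        // Tag a plain-text string; the output interleaves words and tags, e.g. "word_TAG".
        String tagged = tagger.tagString("A quick brown fox jumps over the lazy dog.");
        System.out.println(tagged);
    }
}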

1.1 arabic.tagger

Trained on the *entire* ATB p1-3.

When trained on the train part of the ATB p1-3 split done for the 2005 JHU Summer Workshop (Diab split), using (augmented) Bies tags, it gets the following performance:

96.26% on test portion according to Diab split
(80.14% on unknown words)

1.2 chinese-distsim.tagger

Trained on a combination of CTB7 texts from Chinese and Hong Kong sources with distributional similarity clusters.

LDC Chinese Treebank POS tag set.

Performance:

9
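As a hedged illustration of how this particular model might be used, the snippet below assumes the input Chinese text has already been word-segmented (tokens separated by spaces), since the tagger assigns tags to existing tokens rather than segmenting raw text; the sample sentence is only an example.

import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class ChineseTagDemo {
    public static void main(String[] args) {
        // Load the Chinese model trained with distributional similarity clusters.
        MaxentTagger tagger = new MaxentTagger("models/chinese-distsim.tagger");

        // Assumption: the input has already been segmented into space-separated words.
        String segmented = "我 爱 北京 天安门";
        System.out.println(tagger.tagString(segmented));
    }
}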

About

A Part-Of-Speech Tagger (POS Tagger) is a piece of software that reads text in some language and assigns parts of speech to each word (and other token), such as noun, verb, adjective, etc., although generally computational applications use more fine-grained POS tags like 'noun-plural'.

This software is a Java implementation of the log-linear part-of-speech taggers described in these papers (if citing just one paper, cite the 2003 one):

Kristina Toutanova and Christopher D. Manning. 2000. Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000), pp. 63-70.

Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network. In Proceedings of HLT-NAACL 2003, pp. 252-259.

The tagger was originally written by Kristina Toutanova. Since that time, Dan Klein, Christopher Manning, William Morgan, Anna Rafferty, Michel Galley, and John Bauer have improved its speed, performance, usability, and support for other languages.

The system requires Java 1.6+ to be installed. Depending on whether you're running 32 or 64 bit Java and the complexity of the tagger model, you'll need somewhere between 60 and 200 MB of memory to run a trained tagger (i.e., you may need to give java an option like java -mx200m). Plenty of memory is needed to train a tagger. It again depends on the complexity of the model but at least 1GB is usually needed, often more.

Several downloads are available. The basic download contains two trained tagger models for English. The full download contains three trained English tagger models, an Arabic tagger model, a Chinese tagger model, and a German tagger model. Both versions include the same source and other required files. The tagger can be retrained on any language, given POS-annotated training text for the language.
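The API can also be used to tag a whole file sentence by sentence. The sketch below is modeled loosely on the demo code distributed with the tagger; the command-line arguments (a .tagger model path and a plain-text file) are assumptions for illustration, and, as noted above, the JVM may need a heap option such as -mx200m for the larger models.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.List;

import edu.stanford.nlp.ling.HasWord;
import edu.stanford.nlp.ling.Sentence;
import edu.stanford.nlp.ling.TaggedWord;
import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class TagFileDemo {
    public static void main(String[] args) throws Exception {
        // args[0]: path to a .tagger model file, args[1]: plain-text file to tag
        MaxentTagger tagger = new MaxentTagger(args[0]);

        BufferedReader reader = new BufferedReader(new FileReader(args[1]));
        // Split the text into sentences and tokens, then tag each sentence in turn.
        List<List<HasWord>> sentences = MaxentTagger.tokenizeText(reader);
        for (List<HasWord> sentence : sentences) {
            List<TaggedWord> tagged = tagger.tagSentence(sentence);
            System.out.println(Sentence.listToString(tagged, false));
        }
        reader.close();
    }
}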