[TOC]
Part-of-speech tagger
Many later tasks require tagged words. NLTK comes with its own English tagger, pos_tag:

import nltk

text = nltk.word_tokenize("And now for something completely different")
print(text)
print(nltk.pos_tag(text))
Representing tagged tokens

nltk.tag.str2tuple('word/tag') turns a standard word/tag string into a tagged-token tuple:

text = "The/AT grand/JJ is/VBD ."
print([nltk.tag.str2tuple(t) for t in text.split()])
Reading a tagged corpus

The NLTK corpus readers provide a unified interface, so you can ignore the different file formats. The form is corpus.tagged_words()/tagged_sents(), and parameters can specify categories and fields:

print(nltk.corpus.brown.tagged_words())
Nouns, verbs, adjectives, etc.

Here we take verbs as an example:

from nltk.corpus import brown

word_tag = nltk.FreqDist(brown.tagged_words(categories="news"))
print([word + '/' + tag for (word, tag) in word_tag if tag.startswith('V')])

# Below: look up the different tags given to "money"
wsj = brown.tagged_words(categories="news")
cfd = nltk.ConditionalFreqDist(wsj)
print(cfd['money'].keys())
Try to find the most frequent words of each noun type:

def findtag(tag_prefix, tagged_text):
    cfd = nltk.ConditionalFreqDist((tag, word) for (word, tag) in tagged_text
                                   if tag.startswith(tag_prefix))
    # the dict keys must be converted to a list before slicing
    return dict((tag, list(cfd[tag].keys())[:5]) for tag in cfd.conditions())

tagdict = findtag('NN', nltk.corpus.brown.tagged_words(categories="news"))
for tag in sorted(tagdict):
    print(tag, tagdict[tag])
Exploring the tagged corpus

This requires nltk.bigrams() and nltk.trigrams(), which correspond to the 2-gram and 3-gram models respectively:

brown_tagged = brown.tagged_words(categories="learned")
tags = [b[1] for (a, b) in nltk.bigrams(brown_tagged) if a[0] == "often"]
fd = nltk.FreqDist(tags)
fd.tabulate()
Automatic tagging
Default tagger
The simplest tagger assigns the same tag to every token. Below is a tagger that tags every word as NN, tested with evaluate(). Since nouns are so common, it makes a useful and stable first baseline.

brown_tagged_sents = brown.tagged_sents(categories="news")
raw = 'I do not like eggs and ham, I do not like them Sam I am'
tokens = nltk.word_tokenize(raw)
default_tagger = nltk.DefaultTagger('NN')    # create the tagger
print(default_tagger.tag(tokens))            # call tag() to do the tagging
print(default_tagger.evaluate(brown_tagged_sents))
Regular expression tagger
Note that the rules here are fixed (chosen by you). As the rules become more complete, the accuracy rises.

patterns = [
    (r'.*ing$', 'VBG'),
    (r'.*ed$', 'VBD'),
    (r'.*es$', 'VBZ'),
    (r'.*', 'NN')    # for brevity, only a few rules
]
regexp_tagger = nltk.RegexpTagger(patterns)
regexp_tagger.evaluate(brown_tagged_sents)
Lookup tagger

This differs from the book because of Python 2/3 differences, so take care when debugging. The lookup tagger stores the most likely tag for each word, and a backoff parameter can be set: when a word cannot be tagged, that fallback tagger is used instead (this process is called backoff).

fd = nltk.FreqDist(brown.words(categories="news"))
cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories="news"))
# Python 2/3 difference: use most_common() instead of keys()[:100]
most_freq_words = fd.most_common(100)
likely_tags = dict((word, cfd[word].max()) for (word, times) in most_freq_words)
baseline_tagger = nltk.UnigramTagger(model=likely_tags, backoff=nltk.DefaultTagger('NN'))
baseline_tagger.evaluate(brown_tagged_sents)
N-gram tagging

Basic unigram tagger

A unigram tagger behaves much like the lookup tagger; the technique used to build one is training, by passing tagged sentences to the constructor.
Here our tagger only memorizes the training set instead of building a general model, so agreement on the training data is very good, but it does not generalize to new texts.

size = int(len(brown_tagged_sents) * 0.9)
train_sents = brown_tagged_sents[:size]
test_sents = brown_tagged_sents[size:]
unigram_tagger = nltk.UnigramTagger(train_sents)
unigram_tagger.evaluate(test_sents)
General N-gram tagger

An n-gram tagger chooses the tag for the token at index i by looking at that token together with the tags of the n-1 tokens before it. nltk.UnigramTagger() is the n=1 case; the built-in bigram tagger is nltk.BigramTagger(), and the usage is the same.
Combined taggers

Often an algorithm with wider coverage is more useful than one with higher accuracy. Use the backoff parameter to specify the fallback tagger and so combine taggers. If the cutoff parameter is explicitly given as an int, contexts seen no more than cutoff times are automatically discarded.

t0 = nltk.DefaultTagger('NN')
t1 = nltk.UnigramTagger(train_sents, backoff=t0)
t2 = nltk.BigramTagger(train_sents, backoff=t1)
t2.evaluate(test_sents)
Compared with the earlier single taggers, the accuracy is clearly improved.
Tagging across sentence boundaries

For a word at the beginning of a sentence, there are no preceding n-1 words. Solution: train the tagger with lists of tagged sentences (tagged_sents), so the context is reset at each sentence boundary.
Transformation-based tagging: the Brill tagger

This outperforms the taggers above. The idea: start with broad strokes, then fix the details, refining the tagging little by little.
It uses little memory, each rule is tied to its context, and errors are corrected incrementally as they shrink, rather than by a static lookup table. Of course, the calls differ between Python 3 and Python 2.
from nltk.tag import brill

brill.nltkdemo18plus()
brill.nltkdemo18()
The above is the detailed content of NLTK learning: classifying and annotating vocabulary. For more information, please follow other related articles on the PHP Chinese website!