Introduction to Tokenization and WordNet Basics with Python and NLTK

Natural language processing (NLP) is a fascinating field that combines linguistics and computing to understand, interpret, and manipulate human language. One of the most powerful tools for this is the Natural Language Toolkit (NLTK) in Python. In this text, we will explore the concepts of tokenization and the use of WordNet, a lexical database for the English language that is widely used in NLP.

What is Tokenization?

Tokenization is the process of dividing text into smaller units, called tokens. These tokens can be words, phrases, or even individual characters. Tokenization is a crucial step in text processing because it allows algorithms to understand and analyze text more effectively.

For example, consider the phrase "Hello, world!". Tokenizing this sentence results in four tokens: ["Hello", ",", "world", "!"]. This division allows each part of the text to be analyzed individually, facilitating tasks such as sentiment analysis, machine translation, and named entity recognition.
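As a minimal sketch of exactly this split, NLTK's word_tokenize function (covered in detail below) produces those four tokens:

import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')  # pretrained tokenizer models used by word_tokenize

print(word_tokenize("Hello, world!"))
# ['Hello', ',', 'world', '!']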

In NLTK, tokenization can be done in several ways. Let's see some practical examples.

Tokenizing Text in Sentences

Dividing text into sentences is the first step in many NLP tasks. NLTK makes this easy with the sent_tokenize function.

import nltk
from nltk.tokenize import sent_tokenize

nltk.download('punkt')  # pretrained tokenizer models used by sent_tokenize

texto = "Olá mundo! Bem-vindo ao tutorial de NLTK. Vamos aprender a tokenizar textos."
sentencas = sent_tokenize(texto, language='portuguese')
print(sentencas)

The result will be:

['Olá mundo!', 'Bem-vindo ao tutorial de NLTK.', 'Vamos aprender a tokenizar textos.']

Here, the text was divided into three sentences. This is useful for more detailed analysis where each sentence can be processed individually.
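For example, each sentence can then be passed to later processing steps on its own; a minimal sketch that word-tokenizes each sentence separately, reusing the sentencas list from above:

from nltk.tokenize import word_tokenize

# Process each sentence individually
for sentenca in sentencas:
    print(word_tokenize(sentenca, language='portuguese'))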

Tokenizing Sentences into Words

After dividing the text into sentences, the next step is usually to divide these sentences into words. NLTK's word_tokenize function is used for this.

from nltk.tokenize import word_tokenize

frase = "Olá mundo!"
palavras = word_tokenize(frase, language='portuguese')
print(palavras)

The result will be:

['Olá', 'mundo', '!']

Now we have each word and punctuation symbol as separate tokens. This is essential for tasks like word frequency analysis, where we need to count how many times each word appears in a text.
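As a sketch of that use case, NLTK's FreqDist counts token frequencies directly (the ordering among equal counts may vary):

from nltk import FreqDist
from nltk.tokenize import word_tokenize

texto = "Olá mundo! Olá NLTK."
tokens = word_tokenize(texto, language='portuguese')
frequencias = FreqDist(tokens)
print(frequencias.most_common(3))  # e.g. [('Olá', 2), ('mundo', 1), ('!', 1)]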

Using Regular Expressions for Tokenization

In some cases, you may want more customized tokenization. Regular expressions (regex) are a powerful tool for this. NLTK provides the RegexpTokenizer class for creating custom tokenizers.

from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize("Vamos aprender NLTK.")
print(tokens)

The result will be:

['Vamos', 'aprender', 'NLTK']

Here, we use a regular expression (\w+) that matches runs of word characters (letters, digits, and underscores), ignoring punctuation.
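RegexpTokenizer can also match the separators between tokens rather than the tokens themselves; a short sketch using gaps=True to split on whitespace, which keeps punctuation attached to words:

from nltk.tokenize import RegexpTokenizer

# gaps=True treats the pattern as the separator rather than the token
tokenizer = RegexpTokenizer(r'\s+', gaps=True)
print(tokenizer.tokenize("Vamos aprender NLTK."))
# ['Vamos', 'aprender', 'NLTK.']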

Introduction to WordNet

WordNet is a lexical database that groups words into sets of synonyms called synsets, provides short, general definitions, and records various semantic relationships between these words. In NLTK, WordNet is used to find synonyms, antonyms, hyponyms and hypernyms, among other relationships.

To use WordNet, we need to import the wordnet module from NLTK.

import nltk
from nltk.corpus import wordnet

nltk.download('wordnet')  # WordNet corpus data

Searching for Synsets

A synset, or set of synonyms, is a group of words that share the same meaning. To search for the synsets of a word, we use the synsets function.

sinonimos = wordnet.synsets("dog")
print(sinonimos)

The result will be a list of synsets that represent different meanings of the word "dog".

[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01'), Synset('chase.v.01')]

Each synset is identified by a name that includes the word, the part of speech (n for noun, v for verb, etc.), and a number that distinguishes different senses.
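These components can be inspected programmatically; a short sketch:

syn = wordnet.synset('dog.n.01')
print(syn.name())         # 'dog.n.01'
print(syn.pos())          # 'n' (noun)
print(syn.lemma_names())  # ['dog', 'domestic_dog', 'Canis_familiaris']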

Definitions and Examples

We can get the definition and usage examples of a specific synset.

sinonimo = wordnet.synset('dog.n.01')
print(sinonimo.definition())
print(sinonimo.examples())

The result will be:

a member of the genus Canis (probably descended from the common wolf) that has been domesticated by man since prehistoric times; occurs in many breeds
['the dog barked all night']

This gives us a clear understanding of the meaning and use of "dog" in this context.
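Beyond definitions, each synset exposes the semantic relations mentioned earlier; a sketch showing hypernyms (more general concepts) and hyponyms (more specific ones) for the same synset:

print(sinonimo.hypernyms())     # [Synset('canine.n.02'), Synset('domestic_animal.n.01')]
print(sinonimo.hyponyms()[:3])  # a few of the more specific kinds of dog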

Searching Synonyms and Antonyms

To find synonyms and antonyms of a word, we can explore synset lemmas.

sinonimos = []
antonimos = []

for syn in wordnet.synsets("good"):
    for lemma in syn.lemmas():
        sinonimos.append(lemma.name())
        if lemma.antonyms():
            antonimos.append(lemma.antonyms()[0].name())

print(set(sinonimos))
print(set(antonimos))

The result will be a list of synonyms and antonyms for the word "good".

{'skillful', 'proficient', 'practiced', 'unspoiled', 'goodness', 'good', 'dependable', 'sound', 'right', 'safe', 'respectable', 'effective', 'trade_good', 'adept', 'full', 'commodity', 'estimable', 'honorable', 'undecomposed', 'serious', 'secure', 'dear', 'ripe'}
{'evilness', 'evil', 'ill'}

Calculating Semantic Similarity

WordNet also allows you to calculate the semantic similarity between word senses. Wu-Palmer similarity (wup_similarity) scores two synsets by how close they sit in the hypernym/hyponym hierarchy.

from nltk.corpus import wordnet

cachorro = wordnet.synset('dog.n.01')
gato = wordnet.synset('cat.n.01')
similaridade = cachorro.wup_similarity(gato)
print(similaridade)

The result will be a similarity value between 0 and 1.

0.8571428571428571

This value indicates that "dog" and "cat" are quite similar semantically.
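To see the contrast, compare against a less related concept; a small sketch reusing the synsets above (the exact value depends on the WordNet hierarchy, but it is noticeably lower):

carro = wordnet.synset('car.n.01')
print(cachorro.wup_similarity(carro))  # much lower than the dog/cat score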

Filtering Stopwords

Stopwords are common words that usually add little meaning to a text, such as "e", "a", and "de" in Portuguese ("and", "the", "of"). Removing them helps the analysis focus on the most meaningful parts of the text. NLTK provides stopword lists for several languages.

import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')  # stopword lists for several languages

stop_words = set(stopwords.words('portuguese'))
palavras = ["Olá", "mundo", "é", "um", "lugar", "bonito"]
palavras_filtradas = [w for w in palavras if w not in stop_words]
print(palavras_filtradas)

The result will be:

['Olá', 'mundo', 'lugar', 'bonito']

Here, the stopwords were removed from the original list of words.
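In practice, tokenization and stopword filtering are usually combined; a minimal sketch of that pipeline on a raw sentence, reusing the stop_words set above:

from nltk.tokenize import word_tokenize

frase = "Olá mundo, é um lugar bonito."
tokens = word_tokenize(frase, language='portuguese')
filtradas = [t for t in tokens if t.lower() not in stop_words]
print(filtradas)  # e.g. ['Olá', 'mundo', ',', 'lugar', 'bonito', '.']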

Practical Applications

Sentiment Analysis

Sentiment analysis is a common NLP application whose goal is to determine the opinion or emotion expressed in a text. Tokenization and WordNet lookups are important steps in this process.

First, we split the text into words and remove the stopwords. Then we can use each word's synsets to reason about its polarity. Plain WordNet synsets carry no sentiment scores, so here we use SentiWordNet, an NLTK corpus that attaches positive and negative scores to each synset.

import nltk
from nltk.corpus import wordnet
from nltk.corpus import sentiwordnet as swn

nltk.download('sentiwordnet')  # sentiment scores for WordNet synsets
nltk.download('omw-1.4')       # multilingual WordNet data, needed for lang='por'

texto = "Eu amo programação em Python!"
palavras = word_tokenize(texto, language='portuguese')
palavras_filtradas = [w for w in palavras if w.lower() not in stop_words]

polaridade = 0
for palavra in palavras_filtradas:
    for syn in wordnet.synsets(palavra, lang='por'):
        # Look up the SentiWordNet entry that corresponds to this synset
        senti = swn.senti_synset(syn.name())
        polaridade += senti.pos_score() - senti.neg_score()

print("Polaridade do texto:", polaridade)

In this simplified example, we sum the positive and negative SentiWordNet scores of the filtered words' synsets to estimate the overall polarity of the text.
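To make those scores concrete, individual SentiWordNet entries can also be inspected directly; a short sketch (the exact values come from the SentiWordNet data):

from nltk.corpus import sentiwordnet as swn

bom = swn.senti_synset('good.a.01')
print(bom.pos_score(), bom.neg_score(), bom.obj_score())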

Named Entity Recognition

Another application is named entity recognition (NER), which identifies and classifies names of people, organizations, places, etc., in a text.

import nltk
nltk.download('maxent_ne_chunker')
nltk.download('words')
nltk.download('averaged_perceptron_tagger')  # POS tagger used by pos_tag

frase = "Barack Obama foi o 44º presidente dos Estados Unidos."
palavras = word_tokenize(frase, language='portuguese')
tags = nltk.pos_tag(palavras)   # note: the default tagger is trained on English
entidades = nltk.ne_chunk(tags)
print(entidades)

The result is a tree that identifies "Barack Obama" as a person and "Estados Unidos" as a location. Keep in mind that NLTK's default pos_tag and ne_chunk models are trained on English, so results on Portuguese text like this are only approximate.
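To pull the labeled entities out of that tree programmatically, one sketch iterates over its subtrees:

from nltk import Tree

for subarvore in entidades:
    if isinstance(subarvore, Tree):  # labeled chunks are subtrees
        entidade = " ".join(token for token, tag in subarvore.leaves())
        print(subarvore.label(), "->", entidade)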

Conclusion

In this text, we explored the basics of tokenization and of using WordNet with the NLTK library in Python. We saw how to split texts into sentences and words, how to look up synonyms and antonyms, how to calculate semantic similarity, and practical applications such as sentiment analysis and named entity recognition. NLTK is a powerful tool for anyone interested in natural language processing, offering a wide range of features for transforming and analyzing text effectively.
