How Can NLTK Efficiently Extract Sentences from Text, Handling Complex Linguistic Nuances?
Problem: Obtain a list of sentences from a provided text file, accounting for the complexities of language, such as periods used in abbreviations and numerals.
Inefficient Regular Expression:
import re

pattern = re.compile(r'(\. |^|!|\?)([A-Z][^;↑\.<>@\^&/\[\]]*(\.|!|\?) )', re.M)
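To see why this pattern falls short, here is a minimal demonstration (the sample string is hypothetical): a period after an abbreviation such as "Mr." or "Dr." satisfies the pattern's sentence-ending rule, so the abbreviation gets cut off as if it were a complete sentence.

import re

pattern = re.compile(r'(\. |^|!|\?)([A-Z][^;↑\.<>@\^&/\[\]]*(\.|!|\?) )', re.M)
sample = "Mr. Smith paid $2.50 for it. He thanked Dr. Lee. "
for groups in pattern.findall(sample):
    # groups[1] is the text the pattern considers a full sentence;
    # "Mr. " alone comes out as one, exposing the abbreviation problem
    print(repr(groups[1]))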
Solution Using Natural Language Toolkit (NLTK):
NLTK provides a robust solution for sentence tokenization, as demonstrated by the following code:
import nltk.data

# The pre-trained Punkt model must be available locally;
# if it is not, run nltk.download('punkt') once beforehand.
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

# Read the text file
with open("test.txt") as fp:
    data = fp.read()

# Tokenize the text into sentences
sentences = tokenizer.tokenize(data)

# Print the tokenized sentences, separated by a divider line
print('\n-----\n'.join(sentences))
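NLTK also exposes sent_tokenize, a convenience wrapper that loads the pre-trained Punkt English model for you. A short sketch on an inline string (the sample text here is made up):

from nltk.tokenize import sent_tokenize

text = "Dr. Brown paid $2.50 for it. Did he get a receipt? Yes!"
# The pre-trained model is designed to keep abbreviations like "Dr."
# and decimals like "2.50" inside a single sentence.
for sentence in sent_tokenize(text):
    print(sentence)

This is usually the simplest entry point when you do not need to load a specific model pickle by hand.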
Benefits of NLTK Solution:
- The Punkt tokenizer ships pre-trained on a large corpus, so it handles periods in abbreviations ("Mr.", "e.g.") and in numbers ("2.50") that break the hand-written regex.
- There is no brittle pattern to maintain, and boundaries marked by "!" and "?" are recognized as well.
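Because Punkt is an unsupervised algorithm, a tokenizer can also be trained on domain-specific prose when the stock English model mis-handles your abbreviations. A minimal sketch, assuming domain_text stands in for a much larger corpus of raw text from your domain:

from nltk.tokenize.punkt import PunktSentenceTokenizer

# domain_text is a hypothetical stand-in; real training needs far more text
domain_text = "The pt. was seen by Dr. Lee. Dosage was 2.5 mg. Follow-up in 3 wks."
custom_tokenizer = PunktSentenceTokenizer(domain_text)
print(custom_tokenizer.tokenize("Pt. responded well. Dosage raised to 5 mg."))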