
Keyword extraction algorithm and application examples implemented in Java

WBOY · Original · 2023-06-18 12:14:01


In the Internet era, the sheer volume of text data makes information hard to locate and analyze, which has driven research into and application of keyword extraction in natural language processing. Keyword extraction means identifying the words or phrases that best represent the topic of a piece of text, providing support for tasks such as text classification, retrieval, and clustering. This article introduces several keyword extraction algorithms implemented in Java, along with application examples.

1. TF-IDF algorithm

TF-IDF is a commonly used algorithm for extracting keywords from text. It weights each word by combining its frequency in the current text (TF) with its inverse document frequency across the entire corpus (IDF). The formulas are:

TF = (number of occurrences of the word in the text) / (total number of words in the text)

IDF = log(total number of documents in the corpus / number of documents containing the word)

TF-IDF = TF * IDF
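As a quick worked example with made-up numbers: if a word occurs 3 times in a 100-word document and appears in 2 of a corpus's 4 documents, then TF = 0.03, IDF = ln(4/2) ≈ 0.693, and TF-IDF ≈ 0.0208:

```java
public class TfIdfExample {
    // TF-IDF per the formulas above (no smoothing), for a single word
    static double tfIdf(int countInDoc, int docLength, int totalDocs, int docsWithWord) {
        double tf = (double) countInDoc / docLength;              // 3 / 100 = 0.03
        double idf = Math.log((double) totalDocs / docsWithWord); // ln(4 / 2) ≈ 0.693
        return tf * idf;                                          // ≈ 0.0208
    }

    public static void main(String[] args) {
        System.out.printf("TF-IDF = %.4f%n", tfIdf(3, 100, 4, 2));
    }
}
```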

Java code implementation:

public Map<String, Double> tfIdf(List<String> docs) {
    // Count term frequencies over the whole corpus
    Map<String, Integer> wordFreq = new HashMap<>();
    int totalWords = 0;
    for (String doc : docs) {
        for (String word : doc.split(" ")) {
            wordFreq.put(word, wordFreq.getOrDefault(word, 0) + 1);
            totalWords++;
        }
    }
    // Count, for each word, how many documents contain it. Split into tokens
    // rather than using doc.contains(word), which would match substrings
    // (e.g. "cat" inside "category")
    Map<String, Integer> docFreq = new HashMap<>();
    for (String doc : docs) {
        Set<String> uniqueWords = new HashSet<>(Arrays.asList(doc.split(" ")));
        for (String word : uniqueWords) {
            docFreq.put(word, docFreq.getOrDefault(word, 0) + 1);
        }
    }
    Map<String, Double> tfIdf = new HashMap<>();
    int docSize = docs.size();
    for (String word : wordFreq.keySet()) {
        double tf = (double) wordFreq.get(word) / totalWords;
        // +1 smoothing in the denominator keeps the ratio finite and dampens
        // words that appear in every document
        double idf = Math.log((double) docSize / (docFreq.get(word) + 1));
        tfIdf.put(word, tf * idf);
    }
    return tfIdf;
}

2. TextRank Algorithm

TextRank is a graph-based algorithm used for keyword extraction and summary extraction. It builds a graph from the co-occurrence relationships between words, ranks the words by their importance in that graph, and selects the highest-ranking words as keywords (or the highest-ranking sentences for a summary). Its core idea comes from the PageRank algorithm: co-occurrence relationships between words play the role of links between pages, and iterating the PageRank scores yields a ranking of the words in the text. The TextRank algorithm proceeds in the following steps:

1. Extract the words or phrases in the text;
2. Build a word co-occurrence graph, with co-occurrence relationships as edges;
3. Iteratively calculate the PageRank value of each word;
4. Select the top-ranked words as keywords based on their PageRank values.

Java code implementation:

public List<String> textrank(List<String> docs, int numKeywords) {
    List<String> sentences = new ArrayList<>();
    for (String doc : docs) {
        sentences.addAll(Arrays.asList(doc.split("[。?!;]")));
    }
    List<String> words = new ArrayList<>();
    for (String sentence : sentences) {
        words.addAll(segment(sentence));
    }
    Map<String, Integer> wordFreq = new HashMap<>();
    Map<String, Set<String>> wordCooc = new HashMap<>();
    for (String word : words) {
        wordFreq.put(word, wordFreq.getOrDefault(word, 0) + 1);
        wordCooc.put(word, new HashSet<>());
    }
    for (String sentence : sentences) {
        List<String> senWords = segment(sentence);
        for (String w1 : senWords) {
            if (!wordFreq.containsKey(w1)) {
                continue;
            }
            for (String w2 : senWords) {
                if (!wordFreq.containsKey(w2)) {
                    continue;
                }
                if (!w1.equals(w2)) {
                    wordCooc.get(w1).add(w2);
                    wordCooc.get(w2).add(w1);
                }
            }
        }
    }
    // Iteratively compute a PageRank-style score for each word
    // (damping factor 0.85, fixed number of iterations)
    Map<String, Double> wordScore = new HashMap<>();
    for (String word : wordFreq.keySet()) {
        wordScore.put(word, 1.0);
    }
    final double damping = 0.85;
    for (int iter = 0; iter < 20; iter++) {
        Map<String, Double> newScore = new HashMap<>();
        for (String word : wordFreq.keySet()) {
            double sum = 0.0;
            for (String coocWord : wordCooc.get(word)) {
                sum += wordScore.get(coocWord) / wordCooc.get(coocWord).size();
            }
            newScore.put(word, (1 - damping) + damping * sum);
        }
        wordScore = newScore;
    }
    List<Map.Entry<String, Double>> sortedWords =
            wordScore.entrySet().stream()
                     .sorted(Collections.reverseOrder(Map.Entry.comparingByValue()))
                     .collect(Collectors.toList());
    List<String> keywords = new ArrayList<>();
    for (int i = 0; i < numKeywords && i < sortedWords.size(); i++) {
        keywords.add(sortedWords.get(i).getKey());
    }
    return keywords;
}

private List<String> segment(String text) {
    // TODO: tokenize with a Chinese word segmenter (e.g. a third-party
    // library); splitting on spaces is only a placeholder
    return Arrays.asList(text.split(" "));
}
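The PageRank-style scoring at the heart of TextRank can be illustrated on a tiny hard-coded co-occurrence graph (toy data; 0.85 is the conventional damping factor). A word that co-occurs with many others ends up with the highest score:

```java
import java.util.*;

public class TextRankToy {
    // Score update: S(w) = (1 - d) + d * sum over neighbors v of S(v) / degree(v)
    static Map<String, Double> rank(Map<String, Set<String>> graph, int iterations) {
        final double d = 0.85;
        Map<String, Double> score = new HashMap<>();
        for (String w : graph.keySet()) score.put(w, 1.0);
        for (int i = 0; i < iterations; i++) {
            Map<String, Double> next = new HashMap<>();
            for (String w : graph.keySet()) {
                double sum = 0.0;
                for (String v : graph.get(w)) {
                    sum += score.get(v) / graph.get(v).size();
                }
                next.put(w, (1 - d) + d * sum);
            }
            score = next;
        }
        return score;
    }

    public static void main(String[] args) {
        // "java" co-occurs with every other word; the others only with "java"
        Map<String, Set<String>> graph = new HashMap<>();
        graph.put("java", new HashSet<>(Arrays.asList("code", "jvm", "class")));
        graph.put("code", new HashSet<>(Collections.singletonList("java")));
        graph.put("jvm", new HashSet<>(Collections.singletonList("java")));
        graph.put("class", new HashSet<>(Collections.singletonList("java")));
        Map<String, Double> score = rank(graph, 20);
        // The hub word ranks above the leaf words
        System.out.println(score.get("java") > score.get("code"));
    }
}
```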

3. LDA topic model

LDA is a probabilistic topic model that treats a text as a mixture of several topics and can be used for topic classification and keyword extraction. In LDA, each topic is a probability distribution over words, and each word in a document can be attributed to any of the topics. The model requires the number of topics and the number of iterations to be specified in advance; inference (for example, Gibbs sampling) then yields the word distribution of each topic and the topic distribution of each document.

Java code implementation:

// Note: Dictionary, Corpus, Document, Word, LdaGibbsSampler and
// WordProbability come from a third-party LDA library, not the JDK
public List<String> lda(List<String> docs, int numTopics,
                        int numKeywords, int iterations) {
    List<List<String>> words = new ArrayList<>();
    for (String doc : docs) {
        words.add(segment(doc));
    }
    Dictionary dictionary = new Dictionary(words);
    Corpus corpus = new Corpus(dictionary);
    for (List<String> docWords : words) {
        Document doc = new Document(dictionary);
        for (String word : docWords) {
            doc.addWord(new Word(word));
        }
        corpus.addDocument(doc);
    }
    // 0.5 and 0.1 are the Dirichlet hyperparameters (alpha, beta) of the sampler
    LdaGibbsSampler sampler = new LdaGibbsSampler(corpus, numTopics, 0.5, 0.1);
    sampler.gibbs(iterations);
    List<String> keywords = new ArrayList<>();
    for (int i = 0; i < numTopics; i++) {
        List<WordProbability> wordProbs = sampler.getSortedWordsByWeight(i);
        for (int j = 0; j < numKeywords && j < wordProbs.size(); j++) {
            keywords.add(wordProbs.get(j).getWord().getName());
        }
    }
    return keywords;
}

private List<String> segment(String text) {
    // TODO: tokenize with a Chinese word segmenter (e.g. a third-party
    // library); splitting on spaces is only a placeholder
    return Arrays.asList(text.split(" "));
}

Application examples

Keyword extraction can be applied to text classification, summary extraction, search engine ranking, and other fields. The following are application examples based on the above algorithms.

1. News Classification

Given the text of a set of news reports, the TF-IDF algorithm can extract keywords from each text, and a machine learning algorithm can then use them for classification. For example, a decision tree can classify the news, with the keywords fed into it as features. The classification quality can be evaluated with methods such as cross-validation.
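As a sketch of the feature step (the vocabulary and keywords below are made up), extracted keywords can be turned into a 0/1 feature vector over a fixed keyword vocabulary before being fed to a classifier such as a decision tree:

```java
import java.util.*;

public class KeywordFeatures {
    // Build a 0/1 feature vector: one dimension per vocabulary keyword,
    // set to 1 if that keyword was extracted for the document
    static int[] toFeatureVector(List<String> vocabulary, Set<String> docKeywords) {
        int[] features = new int[vocabulary.size()];
        for (int i = 0; i < vocabulary.size(); i++) {
            features[i] = docKeywords.contains(vocabulary.get(i)) ? 1 : 0;
        }
        return features;
    }

    public static void main(String[] args) {
        List<String> vocab = Arrays.asList("economy", "football", "election", "goal");
        Set<String> newsKeywords = new HashSet<>(Arrays.asList("football", "goal"));
        System.out.println(Arrays.toString(toFeatureVector(vocab, newsKeywords))); // [0, 1, 0, 1]
    }
}
```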

2. Summary extraction

Given the text of an article, you can use the TextRank algorithm to extract the key sentences and combine them into a summary. Abstract extraction can be applied to automatic summarization, search engine display and other fields.

3. Scientific and technological literature search

In scientific and technical literature retrieval, the user typically enters a keyword or combination of keywords; the search engine then uses the TF-IDF algorithm to compute how well each document matches the keywords and sorts the results by that score, letting users quickly find relevant documents. In addition, the LDA topic model can group documents by topic, and topic keywords can serve as search input to improve retrieval results.
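A minimal sketch of the matching step, assuming each document's TF-IDF weights have already been computed (document names and weight values here are illustrative): a document's score for a query is the sum of its TF-IDF weights for the query terms, and results are sorted by that score:

```java
import java.util.*;

public class TfIdfSearch {
    // Score a document against a query by summing the document's TF-IDF
    // weights for each query term; a higher score means a better match
    static double matchScore(Map<String, Double> docTfIdf, List<String> queryTerms) {
        double score = 0.0;
        for (String term : queryTerms) {
            score += docTfIdf.getOrDefault(term, 0.0);
        }
        return score;
    }

    public static void main(String[] args) {
        Map<String, Double> doc1 = new HashMap<>();
        doc1.put("neural", 0.12);
        doc1.put("network", 0.09);
        Map<String, Double> doc2 = new HashMap<>();
        doc2.put("database", 0.15);
        List<String> query = Arrays.asList("neural", "network");
        // doc1 matches the query terms, doc2 does not
        System.out.println(matchScore(doc1, query) > matchScore(doc2, query));
    }
}
```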

Conclusion

This article introduces several keyword extraction algorithms and application examples implemented in Java. The TF-IDF algorithm is one of the most commonly used algorithms in text processing. The TextRank algorithm can extract key sentences, and the LDA topic model can classify text topics. These algorithms can be applied to document classification, automatic summarization, search engine ranking and other fields, and have broad application prospects.

