
In-depth analysis of automatically extracting and generating article topic keywords in PHP

WBOY · Original · 2016-07-21 15:09:56

In the past I avoided this problem when writing programs: users were required to enter tags themselves. But for lazy users, and for the sake of a better experience, it would be nice to generate article keywords automatically, i.e. to obtain an article's tags without manual input. To prepare for a new project, I tinkered with this all night and studied how the feature works.
Automatically extracting keywords can be roughly divided into three steps:
1. Use a word-segmentation algorithm to segment the title and the body separately, extracting candidate words and their frequencies.
The two mainstream approaches are the Chinese Academy of Sciences' ICTCLAS and segmenters based on the Hidden Markov Model. Both are fairly high-end, have a learning curve, and only offer C++/Java interfaces. For PHP, two recommended options are PSCWS and HTTPCWS. SCWS released its official version 1.0.0 on 2008-03-08, and the latest version has reached 1.0.4; PSCWS is its PHP port. HTTPCWS was developed by Zhang Yan and was previously called PHPCWS. It first calls the API of the "ICTCLAS 3.0 Shared Edition" Chinese word-segmentation algorithm for an initial pass, then merges segments with a self-written reverse maximum matching algorithm, and finally filters punctuation to produce the segmentation result. Unfortunately, it currently only supports Linux and has not yet been ported to Windows.
2. Compare the extracted words against an existing lexicon, filter out useless words, and keep the keywords that best fit the rules. The key here is the lexicon. We can define one ourselves, or use an existing mature one. Sina and NetEase blogs, for example, offer this feature and presumably have good segmentation lexicons, since they are large sites. As a small programmer, I cannot obtain any authoritative lexicon, so I can only start from existing open-source programs and examine theirs.
3. From the filtered results, select the most suitable words as the final keywords, i.e. those that best fit the current content. This stage requires case-by-case analysis; at best it can only approximate human judgment. Currently, most PHP CMSs ship their own keyword-extraction systems.
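The three steps above can be sketched in plain PHP. This is a minimal illustration, not any of the libraries mentioned: a trivial regex tokenizer stands in for a real Chinese segmenter such as SCWS/HTTPCWS, and all function names are my own.

```php
<?php
// Step 1: segment text into words and count frequencies.
// (Stand-in tokenizer for illustration; Chinese text needs a real
// segmenter such as SCWS or HTTPCWS instead.)
function word_frequencies(string $text): array {
    $words = preg_split('/[^\p{L}\p{N}.]+/u', mb_strtolower($text), -1, PREG_SPLIT_NO_EMPTY);
    return array_count_values($words);
}

// Step 2: drop "useless words" using a small stopword lexicon.
function remove_stopwords(array $freq, array $stopwords): array {
    return array_diff_key($freq, array_flip($stopwords));
}

// Step 3: pick the N most frequent remaining words as keywords.
function top_keywords(array $freq, int $n): array {
    arsort($freq);                       // sort by frequency, descending
    return array_slice(array_keys($freq), 0, $n);
}

$stopwords = ['the', 'of', 'and', 'to', 'for', 'a'];
$freq = remove_stopwords(word_frequencies(
    'ThinkPHP 2.0 and the ThinkPHP framework: support for 2.0 ends'), $stopwords);
print_r(top_keywords($freq, 3));
```

With a real segmenter plugged into step 1, steps 2 and 3 carry over unchanged.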
The most widely circulated word-segmentation source code on the net is DedeCMS's. I tested it and found it rather crude, with poor results. It first sets a total keyword length and the number of keywords to obtain, then fetches words: every word segmented from the title is treated as a required keyword, and words are then read from the body until the preset length is reached; that becomes the final keyword list. In addition, meaningless words such as "we" are not removed and appear as keywords far too often; sometimes even HTML fragments containing spaces are extracted as keywords, which needs improvement. Still, as an auxiliary feature it is already quite useful. Discuz does slightly better, but Discuz does not provide source code, only an online API.
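As I read it, the Dede-style strategy just described can be sketched roughly as follows. This is a simplified illustration under my own reading, not Dede's actual code, and the function name is made up:

```php
<?php
// Dede-style selection (as described above): take every title word as a
// keyword, then append body words until a preset total length is reached.
function dede_style_keywords(array $titleWords, array $bodyWords, int $maxLen): array {
    $keywords = [];
    $len = 0;
    foreach (array_merge($titleWords, $bodyWords) as $word) {
        if (in_array($word, $keywords, true)) {
            continue;                        // skip duplicates
        }
        $next = $len + mb_strlen($word) + 1; // +1 for a separator
        if ($next > $maxLen) {
            break;                           // length budget exhausted
        }
        $keywords[] = $word;
        $len = $next;
    }
    return $keywords;
}

print_r(dede_style_keywords(['thinkphp', 'stop', 'support'], ['version', '2.0', 'stop'], 32));
```

Note that nothing here filters stopwords, which is exactly the weakness criticized above.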
Dede's segmenter itself has several versions; the best should be the latest, where frequency of occurrence is everything. Let's compare the segmentation results of Dede 5.7 with those of Discuz's API.
Test example:
$title="THINKPHP official will stop supporting version 2.0";
$body="In order to better develop and maintain the ThinkPHP framework and its support work, the official team announced that from May 1, 2012, maintenance and support for version 2.0 and earlier will be discontinued. For the sake of energy saving and low carbon, the corresponding version and documentation downloads on the official website will also be removed.
In memory of those years, the ThinkPHP versions we developed together!
About ThinkPHP version 2.0
ThinkPHP was born in 2006 and is committed to rapid WEB application development. Version 2.0 was released on October 1, 2009, completing a new refactoring and leap over the earlier 1.* versions. It was an epoch-making release at the time, laid the foundation for later versions, and accumulated a large user base and many sites. With the framework's rapid updates and the successive releases of versions 2.1, 2.2 and 3.0, the 3.0 era of ThinkPHP has arrived and the life cycle of 2.0 has come to an end. But many 2.0 features have been carried over or improved in version 2.1, and upgrading from 2.0 to 2.1 or 2.2 is relatively easy. Version 2.2 is the final release of the 2.* line: it will receive no new features, only bug fixes.";
1. Dede segmentation
The results, sorted, are as follows:
Title Array (
    [THINKPHP] => 1
    [Official] => 1
    [Coming soon] => 1
    [Stop] => 1
    [Right] => 1
    [2.0] => 1
    [Version] => 1
    [of] => 1
    [Support] => 1
)
Content Array (
    [Version] => 12
    [of] => 12
    [And] => 8
    [ThinkPHP] => 5
    [2.0] => 5
    [Also] => 3
    [2.2] => 3
    [2.1] => 3
    [Development] => 3
    [3.0] => 2
    [Yes] => 2
    [Quick] => 2
    [To] => 2
    [Released] => 2
    [Maintenance] => 2
    [Before] => 2
    [Up] => 2
    [New version] => 2
    [Support] => 2
    [Framework] => 2
    [At the same time] => 2
    [From] => 2
)
How do we extract the final keywords from this? The initial idea: first remove words like "of" and "and", then walk the body words in frequency order and check whether each also appears in the title; any word that appears in both is required. This way a certain number of words can be extracted as the final keywords. From the arrays above we get five keywords:
version, thinkphp, 2.0, support, stop
The result seems acceptable.
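The merging idea above (drop stopwords, then rank body words by frequency, preferring those that also occur in the title) can be sketched as follows. This is my own illustrative implementation of the idea, not code from Dede; the function name is made up.

```php
<?php
// Given word-frequency arrays for title and body, drop stopwords, then
// rank body words by frequency, preferring those also found in the title.
function extract_keywords(array $titleFreq, array $bodyFreq, array $stopwords, int $limit): array {
    $stop = array_flip($stopwords);
    $titleFreq = array_diff_key($titleFreq, $stop);
    $bodyFreq  = array_diff_key($bodyFreq, $stop);
    arsort($bodyFreq);                   // most frequent body words first
    $inTitle = $others = [];
    foreach ($bodyFreq as $word => $count) {
        if (isset($titleFreq[$word])) {
            $inTitle[] = $word;          // appears in both title and body
        } else {
            $others[] = $word;
        }
    }
    // Fill any remaining slots with frequent body-only words.
    return array_slice(array_merge($inTitle, $others), 0, $limit);
}

$title = ['thinkphp' => 1, 'official' => 1, 'stop' => 1, 'support' => 1, '2.0' => 1, 'version' => 1, 'of' => 1];
$body  = ['version' => 12, 'of' => 12, 'and' => 8, 'thinkphp' => 5, '2.0' => 5, 'development' => 3];
print_r(extract_keywords($title, $body, ['of', 'and', 'the'], 5));
```

Title-only words (like "stop" and "support" above) could additionally be appended when the body yields too few candidates.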
2. Discuz: the API returns an XML document. After parsing, the keywords obtained are
的, fast, version upgrade, development, user
Five words, and the first one is "的" (the Chinese equivalent of "of")...
Comparing the two methods: Dede plus the post-processing above stays closer to the document's content and should be slightly better, while Discuz drifts from the article's topic, although the words it picks are quite popular ones.
