PHP: Automatically Obtaining and Generating Article Keywords
Most programs require users to enter tags themselves. For lazier users, and for a better overall experience, it helps to have a feature that automatically generates article keywords and article tags. To get ready for a new project, I tinkered with it all night and worked out this feature.
Automatically obtaining keywords can be roughly divided into three steps:
1. Use a word segmentation algorithm to segment the title and the content separately and extract candidate words with their frequencies. The two main options at the moment are the Chinese Academy of Sciences' ICTCLAS and Hidden Markov Model based approaches, but both are rather heavyweight, have a learning curve, and only offer C++/Java interfaces. For PHP there are two recommended options, PSCWS and HTTPCWS. SCWS released its official 1.0.0 version on 2008-03-08 and is now up to 1.0.4; PSCWS is its PHP port. HTTPCWS was developed by Zhang Yan and was previously called PHPCWS. It first calls the API of the "ICTCLAS 3.0 Shared Edition" Chinese word segmenter for an initial pass, then applies its own reverse maximum matching algorithm to merge segments, and adds punctuation filtering to produce the final result. Unfortunately it currently only runs on Linux and has not yet been ported to Windows. (A minimal segmentation sketch follows after step 3.)
2. Compare the segmentation results against an existing lexicon, filter out useless words, and keep the candidates that best match the rules. The key here is the lexicon. We can define one ourselves or use a mature existing one. Sina and NetEase blogs, for example, both have this feature and, being large sites, presumably have good segmentation dictionaries; as a mere programmer I cannot get hold of an authoritative lexicon, so I can only start from existing open-source programs and study their dictionaries.
3. From the processed results, pick the most suitable words as the final keywords, i.e. the ones that best fit the current content. This stage has to be handled case by case; in any event it will never match human judgement. Currently every PHP CMS ships with its own keyword extraction module.
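As a concrete illustration of step 1, here is a minimal sketch using the scws PHP extension mentioned above. The dictionary and rule paths are placeholders for a typical Linux install, and the method and field names follow the extension's documented interface, so verify them against the version you actually have installed.

<?php
// Minimal sketch of step 1: segment a text with the scws PHP extension
// and collect the most frequent words. The paths below are placeholders
// for a typical Linux install; adjust them to your environment.
function segment_top_words($text, $limit = 20) {
    $so = scws_new();                                      // new segmenter instance
    $so->set_charset('utf8');
    $so->set_dict('/usr/local/scws/etc/dict.utf8.xdb');    // main dictionary
    $so->set_rule('/usr/local/scws/etc/rules.utf8.ini');   // segmentation rules
    $so->set_ignore(true);                                 // drop punctuation
    $so->send_text($text);
    $words = array();
    // get_tops() returns the most frequent words; per the scws docs each
    // entry carries 'word' and 'times' (frequency) keys.
    foreach ($so->get_tops($limit) as $item) {
        $words[$item['word']] = $item['times'];
    }
    $so->close();
    return $words;
}

$titleWords = segment_top_words($title, 10);   // e.g. the $title from the test below
$bodyWords  = segment_top_words($body, 30);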
The most widely circulated word segmentation source code on the Internet is DEDECMS's. I ran a test and found it rather crude, with poor results. It first sets a keyword length and the number of keywords to fetch, then segments the text: every word segmented from the title is treated as a required keyword, and it keeps reading words from the body until the preset length is reached; together these become the final keywords. Meaningless words such as "we" (我们) are not removed and show up as keywords far too often, and sometimes even HTML fragments containing spaces are extracted as keywords, which needs improvement. Still, as an auxiliary feature it is already quite usable. Discuz does slightly better, but Discuz does not provide source code, only an online API.
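To make that description concrete, here is a simplified sketch of the strategy as described, not DEDECMS's actual code; the helper name and the length limit are purely illustrative.

<?php
// Simplified illustration of the strategy described above (not dede's real code):
// every title word is kept, then body words are appended by frequency
// until the keyword string reaches a preset length. No stop words are
// removed, which is why meaningless words can slip through.
function dede_style_keywords(array $titleWords, array $bodyWords, $maxLen = 30) {
    arsort($bodyWords);                          // most frequent body words first
    $keywords = array_keys($titleWords);         // all title words are kept
    foreach (array_keys($bodyWords) as $word) {
        if (in_array($word, $keywords)) {
            continue;                            // already taken from the title
        }
        $candidate = implode(',', array_merge($keywords, array($word)));
        if (mb_strlen($candidate, 'UTF-8') > $maxLen) {
            break;                               // stop once the length limit is hit
        }
        $keywords[] = $word;
    }
    return implode(',', $keywords);
}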
There are also several versions of dede's word segmentation code; the best should be the latest one, which includes word frequencies. Let's compare the results of dede 5.7's segmentation with Discuz's API.
Test example:
$title="THINKPHP will officially stop supporting version 2.0";
$body="In order to better develop, maintain and support the ThinkPHP framework, the official announced that from May 1, 2012, maintenance and support for 2.0 and previous versions will be cancelled. In order to save energy and low carbon, it will also be cancelled. Download the corresponding version and documentation from the official website
.
Let’s remember the ThinkPHP version we developed together over those years!
About ThinkPHP version 2.0
ThinkPHP was born in 2006 and is dedicated to rapid WEB application development. Version 2.0 was released on October 1, 2009 as a complete rework of and leap beyond the earlier 1.* branch; it was an epoch-making release at the time, laid the foundation for the versions that followed, and accumulated a large user base and many sites. With the framework's rapid updates and the successive releases of 2.1, 2.2 and 3.0, the 3.0 era of ThinkPHP has arrived and the 2.0 life cycle has come to an end. Even so, most of 2.0's features have been carried over or improved in 2.1, and upgrading from 2.0 to 2.1 or 2.2 is relatively easy. Version 2.2 is the final release of the 2.* branch; it will receive no new features, only bug fixes. ";
1. Dede word segmentation
The sorted results are as follows:
Title Array
(
[THINKPHP] => 1
[Official] => 1
[Coming soon] => 1
[Stop] => 1
[pair] => 1
[2.0] => 1
[Version] => 1
[of] => 1
[Support] => 1
)
Content Array
(
[Version] => 12
[of] => 12
[And] => 8
[ThinkPHP] => 5
[2.0] => 5
[Also] => 3
[2.2] => 3
[2.1] => 3
[Development] => 3
[3.0] => 2
[Yes] => 2
[Quick] => 2
[To] => 2
[Publish] => 2
[Maintenance] => 2
[Before] => 2
[了] => 2
[New version] => 2
[Support] => 2
[Frame] => 2
[At the same time] => 2
[from] => 2
*******
How do we extract the final keywords from this? My initial idea is to first remove words such as "的" ("of"), then go through the content words in order of frequency and check whether each one also appears in the title; taking a fixed number of words this way gives the final keywords (a sketch of this post-processing follows the result below). From the results above we get
Version, thinkphp, 2.0, Support, Stop
Five keywords. It seems the results are acceptable.
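A minimal sketch of this post-processing, assuming the two frequency arrays produced by the segmenter as input; the stop-word list here is only illustrative, a real lexicon would be far larger.

<?php
// Post-processing sketch: drop stop words, sort body words by frequency,
// and rank words that also appear in the title first. The stop-word list
// is only illustrative; a real lexicon would be much larger.
function pick_keywords(array $titleWords, array $bodyWords, $limit = 5) {
    $stopWords = array('的', '了', '和', '也', '是', '对', '从', '我们');
    arsort($bodyWords);                               // highest frequency first
    $inTitle = array();
    $others  = array();
    foreach (array_keys($bodyWords) as $word) {
        if (in_array($word, $stopWords)) {
            continue;                                 // skip meaningless words
        }
        if (isset($titleWords[$word])) {
            $inTitle[] = $word;                       // shared with the title: rank first
        } else {
            $others[] = $word;
        }
    }
    return array_slice(array_merge($inTitle, $others), 0, $limit);
}

// Usage: $keywords = pick_keywords($titleWords, $bodyWords);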
2. With Discuz, what the API returns is an XML document (a parsing sketch follows below); the keywords obtained after parsing it are
的, fast, version upgrade, development, user
Five words, and the first one is "的"...
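For reference, here is a minimal sketch of parsing such an XML response with SimpleXML. The URL, the query parameters, and the <kw> element name are hypothetical placeholders, since Discuz only exposes this as an online API; check the actual response format of whatever service you call.

<?php
// Hypothetical sketch: call a keyword API over HTTP and parse its XML reply.
// The URL, parameters and the <kw> element name are placeholders only;
// check the real response format of the service you use.
$response = file_get_contents(
    'http://keyword.example.com/api?' . http_build_query(array(
        'title'   => $title,
        'content' => $body,
    ))
);

$keywords = array();
if ($response !== false && ($xml = simplexml_load_string($response)) !== false) {
    foreach ($xml->xpath('//kw') as $node) {   // '<kw>' is a placeholder tag name
        $keywords[] = (string) $node;
    }
}
print_r($keywords);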
Comparing the two approaches, dede plus the post-processing above stays closer to the document's content and seems slightly better, while Discuz strays from the topic of the article, though the words it returns do have a certain popularity.