1. Download and unpack the software
The versions used are:
(1) apache-nutch-2.2.1
(2) hbase-0.90.4
(3) solr-4.9.0
Unpack all three into /usr/search.
2. Configure Nutch
(1) vi /usr/search/apache-nutch-2.2.1/conf/nutch-site.xml
<property>
  <name>storage.data.store.class</name>
  <value>org.apache.gora.hbase.store.HBaseStore</value>
  <description>Default class for storing data</description>
</property>
(2) vi /usr/search/apache-nutch-2.2.1/ivy/ivy.xml
The following line is commented out by default; remove the comment markers to enable it.
<dependency org="org.apache.gora" name="gora-hbase" rev="0.3" conf="*->default"></dependency>
(3) vi /usr/search/apache-nutch-2.2.1/conf/gora.properties
Add the following line:
gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
The three steps above tell Nutch to use HBase for storage.
The steps below are the ones actually required to build a basic Nutch.
(4) Build the runtime
cd /usr/search/apache-nutch-2.2.1/
ant runtime
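A successful build creates a runtime directory containing local (standalone mode, which the rest of this walkthrough uses) and deploy (for running on Hadoop). A quick check:
ls /usr/search/apache-nutch-2.2.1/runtime
deploy  local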
(5) Verify that Nutch is installed
[root@jediael44 apache-nutch-2.2.1]# cd /usr/search/apache-nutch-2.2.1/runtime/local/bin/
[root@jediael44 bin]# ./nutch
Usage: nutch COMMAND
where COMMAND is one of:
inject inject new urls into the database
hostinject creates or updates an existing host table from a text file
generate generate new batches to fetch from crawl db
fetch fetch URLs marked during generate
parse parse URLs marked during fetch
updatedb update web table after parsing
updatehostdb update host table after parsing
readdb read/dump records from page database
readhostdb display entries from the hostDB
elasticindex run the elasticsearch indexer
solrindex run the solr indexer on parsed batches
solrdedup remove duplicates from solr
parsechecker check the parser for a given url
indexchecker check the indexing filters for a given url
plugin load a plugin and run one of its classes main()
nutchserver run a (local) Nutch server on a user defined port
junit runs the given JUnit test
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
(6) vi /usr/search/apache-nutch-2.2.1/runtime/local/conf/nutch-site.xml and set the crawler's agent name (Nutch refuses to crawl until http.agent.name is set)
<property>
  <name>http.agent.name</name>
  <value>My Nutch Spider</value>
</property>
(7) Create seed.txt
cd /usr/search/apache-nutch-2.2.1/runtime/local/bin/
vi seed.txt
http://nutch.apache.org/
(8) Edit the URL filter
vi /usr/search/apache-nutch-2.2.1/conf/regex-urlfilter.txt
Change
# accept anything else
+.
to
# accept anything else
+^http://([a-z0-9]*\.)*nutch.apache.org/
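For context, after the edit the tail of regex-urlfilter.txt looks roughly like this; the skip rules above the accept line are the defaults that ship with Nutch (shortened here, so treat this as a sketch rather than the exact file):
# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):
# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]
# accept only URLs under nutch.apache.org
+^http://([a-z0-9]*\.)*nutch.apache.org/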
(9) Index more fields
By default, only the fields in schema.xml produced by the core and index-basic plugins are indexed. To index additional fields, extend the plugin list as follows.
Edit nutch-default.xml and add the extra indexing plugins to the plugin.includes property. Its description in nutch-default.xml reads:
Regular expression naming plugin directory names to include. Any plugin not matching this expression is excluded. In any case you need at least include the nutch-extensionpoints plugin. By default Nutch includes crawling just HTML and plain text via HTTP, and basic indexing and search plugins. In order to use HTTPS please enable protocol-httpclient, but be aware of possible intermittent problems with the underlying commons-httpclient library.
Alternatively, add a plugin.includes property to nutch-site.xml and put the value there. Note that a property set in nutch-site.xml replaces the one in nutch-default.xml, so the original plugin list must be copied over as well.
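As a sketch, an override in nutch-site.xml that keeps the stock plugin list and adds index-more for extra fields (the stock list below is taken from Nutch 2.2.1's nutch-default.xml, but verify it against your copy; index-more is only one example of a plugin you might add):
<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor|more)|urlnormalizer-(pass|regex|basic)|scoring-opic</value>
</property>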
3. Configure HBase
(1)vi /usr/search/hbase-0.90.4/conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>[your path]</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>[your path]</value>
  </property>
</configuration>
Note: this step is optional. If skipped, the defaults in hbase-default.xml (/usr/search/hbase-0.90.4/src/main/resources/hbase-default.xml) are used.
The defaults are:
<property>
  <name>hbase.rootdir</name>
  <value>file:///tmp/hbase-${user.name}/hbase</value>
  <description>The directory shared by region servers and into which HBase persists. The URL should be 'fully-qualified' to include the filesystem scheme. For example, to specify the HDFS directory '/hbase' where the HDFS instance's namenode is running at namenode.example.org on port 9000, set this value to: hdfs://namenode.example.org:9000/hbase. By default HBase writes into /tmp. Change this configuration else all data will be lost on machine restart.</description>
</property>
That is, data lands under /tmp by default, so it may be lost whenever the machine restarts.
It is still advisable to configure both properties explicitly, especially the second, ZooKeeper-related one; leaving them unset can lead to all kinds of problems. The configuration below places both directories on the local filesystem.
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/jediael/hbaserootdir</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>file:///home/jediael/hbasezookeeperdataDir</value>
  </property>
</configuration>
Note that without the file:// prefix the path is interpreted as hdfs://,
although in 0.90.4 the default is still the local filesystem.
4. Configure Solr
(1) Overwrite Solr's schema.xml. (For Solr 4, schema-solr4.xml should be used.)
cp /usr/search/apache-nutch-2.2.1/conf/schema.xml /usr/search/solr-4.9.0/example/solr/collection1/conf/
(2) With Solr 3.6 the configuration would be complete at this point, but with 4.9 the following changes are needed:
Edit the schema.xml file copied above.
Delete the filter that no longer exists in Solr 4:
<filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
Add the _version_ field that Solr 4's update log requires:
<field name="_version_" type="long" indexed="true" stored="true"/>
5. Start the crawl
(1) Start HBase
[root@jediael44 bin]# cd /usr/search/hbase-0.90.4/bin/
[root@jediael44 bin]# ./start-hbase.sh
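Before moving on it is worth confirming HBase is actually up; the HBase shell's built-in status and list commands are enough for that:
[root@jediael44 bin]# ./hbase shell
hbase(main):001:0> status
hbase(main):002:0> list
Once a crawl has run, list should also show the webpage table Nutch creates, prefixed with the crawl ID (e.g. TestCrawl_webpage for the crawl started below).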
(2) Start Solr
[root@jediael44 bin]# cd /usr/search/solr-4.9.0/example/
[root@jediael44 example]# java -jar start.jar
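Once Jetty is up, the Solr admin UI should answer at http://localhost:8983/solr/. A quick command-line check against the default example core collection1 (an empty result set is fine at this point):
curl 'http://localhost:8983/solr/collection1/select?q=*:*&wt=json'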
(3) Start Nutch and kick off the crawl
[root@jediael44 example]# cd /usr/search/apache-nutch-2.2.1/runtime/local/bin/
[root@jediael44 bin]# ./crawl seed.txt TestCrawl http://localhost:8983/solr 2
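The crawl script's arguments are <seedDir> <crawlId> <solrUrl> <numberOfRounds>; it simply chains the individual commands from the usage listing above. For reference, one round is roughly the sequence below (a hedged sketch: exact flags differ between 2.x releases, so run ./nutch <command> without arguments to check your build):
./nutch inject seed.txt -crawlId TestCrawl
./nutch generate -topN 1000 -crawlId TestCrawl
./nutch fetch -all -crawlId TestCrawl
./nutch parse -all -crawlId TestCrawl
./nutch updatedb -crawlId TestCrawl
./nutch solrindex http://localhost:8983/solr -all -crawlId TestCrawl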
That's it; the crawl is now running.
For an analysis of what the steps above actually do, see:
Building a search engine with Nutch/HBase/Solr, part 2: content analysis
http://blog.csdn.net/jediael_lu/article/details/37738569
When scheduling Nutch as a recurring job via crontab, the following error appeared:
JAVA_HOME is not set.
So a wrapper script was created to run the crawl:
#!/bin/bash
export JAVA_HOME=/usr/java/jdk1.7.0_51
/opt/jediael/apache-nutch-2.2.1/runtime/local/bin/crawl /opt/jediael/apache-nutch-2.2.1/runtime/local/urls/ main http://localhost:8080/solr/ 2 >> ~jediael/nutch.log
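Save the script as myCrawl.sh (the name the cron entry below uses) and make it executable:
chmod +x /opt/jediael/apache-nutch-2.2.1/runtime/local/bin/myCrawl.sh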
Then configure the cron job (minute 30 past 0:00, 6:00, 8:00 and every second hour up to 22:00):
30 0,6,8,10,12,14,16,18,20,22 * * * bash /opt/jediael/apache-nutch-2.2.1/runtime/local/bin/myCrawl.sh
