
Daily Maintenance and Management of MongoDB

Jun 07, 2016 03:57 PM


This article introduces the management tools used for the day-to-day operation and maintenance of MongoDB.

Daily MongoDB maintenance includes using configuration files, setting up access control, interacting through the shell, monitoring and managing the system, and performing routine database backup and recovery.

Starting and Stopping MongoDB

After startup, the database can also be reached in web form via the server's IP address plus the web port.

Configuration file
To start a database instance with a configuration file, create and edit mongodb.config (the name is arbitrary) in the bin folder.
For example, add: dbpath = /data/db/
When starting, add the -f (or --config) parameter and point it at the configuration file.
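A minimal sketch of this setup, assuming Unix-style paths (the file name and log location are illustrative, not taken from the original article):

# mongodb.config -- example configuration file
dbpath = /data/db
logpath = /data/db/mongodb.log
port = 27017

# start the instance and point it at the file
./mongod -f mongodb.config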
Starting in daemon mode
Why use daemon mode? If the session that started the database service is closed, the MongoDB service terminates with it, which is very unsafe. Starting it as a daemon avoids this.
Add the --fork parameter; when forking you must also specify a log file with the --logpath parameter.
Example (note that --fork is only supported on Unix-like systems, not on Windows):
./mongod --dbpath /data/db --logpath /data/db/mongodb50.log --fork
Common mongod parameters
dbpath: path where the data files are stored
logpath: path of the log file
bind_ip: IP address the service binds to; usually left empty, in which case the service listens on all IPs
port: port the service listens on; the web console runs on this port plus 1000
fork: start the service as a background daemon
journal: enable journaling, which shortens recovery time after a single-node failure by persisting operation logs
config: when there are many command-line parameters, use this option to point to a configuration file instead
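As a rough illustration only, a startup command combining several of the options above might look like this (paths are hypothetical):

./mongod --dbpath /data/db --logpath /data/db/mongod.log --port 27017 --journal --fork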
Shutting down the database
Interrupt directly with Control+C.
While connected, switch to the admin database and issue db.shutdownServer() to terminate the MongoDB instance (see the example after the note below).
On Unix, send kill -2 PID or kill -15 PID to terminate the process:
ps aux | grep mongod
kill -2 (yourPID)
ps aux | grep mongod

Note: do not use kill -9 PID to kill the process; doing so may corrupt the MongoDB database.
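A minimal example of the db.shutdownServer() approach mentioned above, run from the mongo shell:

> use admin
switched to db admin
> db.shutdownServer()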

Accessing the database

Binding an IP address: --bind_ip
// MongoDB can be restricted to accept connections on one specific IP only; just add the bind_ip parameter at startup, as follows:
[root@mongodb01 /home/mongo/mongodb-2.0.2/bin]$ ./mongod --bind_ip 192.168.1.61
Setting the listening port: --port
// Change the server's listening port to 27018:
[root@mongodb01 /home/mongo/mongodb-2.0.2/bin]$ ./mongod --bind_ip 192.168.1.61 --port 27018
// (Error example) If the client does not specify a port, it connects to the default port 27017, which fails in this case:
[root@mongodb01 /home/mongo/mongodb-2.0.2/bin]$ ./mongo 192.168.1.61
MongoDB shell version: 2.0.2
connecting to: 192.168.1.61/test
Sun Apr 14 21:45:26 Error: couldn't connect to server 192.168.1.61 shell/mongo.js:81 exception: connect failed
Logging in with a username and password: start with the --auth parameter
// First enable the authentication module by specifying the --auth parameter at startup:
[root@mongodb01 /home/mongo/mongodb-2.0.2/bin]$ ./mongod --auth
Enabling authentication

By default there is an admin database. Users stored in admin.system.users have broader privileges than users defined in other databases. As long as no user has been added to admin.system.users, clients can connect without any authentication and perform any operation on the database; the --auth startup parameter only takes effect once a user exists in admin.system.users.

1. Create the system root user

>db.addUser("root","123456")
>db.auth("root","123456")

2. Create a read-only user

>db.addUser("user_reader","1234567",true)

To add a read-only user, simply pass true as the third parameter.
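Once --auth is in effect, you can also authenticate when launching the shell. A hedged sketch using the credentials created above (host and port are assumed defaults, not from the original article):

./mongo 127.0.0.1:27017/admin -u root -p 123456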

Working from the command line

The mongo shell is not limited to interactive use: it can also execute a specified JavaScript file or a given command snippet, and it can be driven from the Linux shell.

1. Execute a statement with the --eval parameter

Count how many documents the t1 collection in the test database contains:

> db.t1.find()
{ "_id" : ObjectId("4f8ac746b2764d3c3e2cafbb"), "num" : 1 }
{ "_id" : ObjectId("4f8ac74cb2764d3c3e2cafbc"), "num" : 2 }
{ "_id" : ObjectId("4f8ac74eb2764d3c3e2cafbd"), "num" : 3 }
{ "_id" : ObjectId("4f8ac751b2764d3c3e2cafbe"), "num" : 4 }
{ "_id" : ObjectId("4f8ac755b2764d3c3e2cafbf"), "num" : 5 }
> db.t1.count()
5

Run the count on the t1 collection directly with the --eval parameter:

$./mongo.exe  --eval  "printjson(db.t1.count())"
MongoDB shell version: 2.0.2
connecting to: test
5

2. Execute the contents of a JavaScript file

$cat t1_count.js
var count = db.t1.count();
printjson('count of t1 is: '+count);

The output is:

 $./mongo t1_count.js
MongoDB shell version: 2.0.2
connecting to: test
"count of t1 is: 5"

Tip: use the --quiet parameter to suppress part of the connection banner and make the output cleaner:

$ ./mongo --quiet t1_count.js
"count of t1 is: 5"

Operation management

Viewing active operations
> db.currentOp()
> db.$cmd.sys.inprog.findOne()   // the low-level query through the $cmd virtual collection that db.currentOp() wraps

The output looks like this:

> db.currentOp()
{
        "inprog" : [
                {
                        "opid" : 630385,  
                        "active" : true,
                        "lockType" : "read",
                        "waitingForLock" : false,
                        "secs_running" : 0,
                        "op" : "query",
                        "ns" : "test",
                        "query" : {
                                "count" : "t1",
                                "query" : {

                                },
                                "fields" : {

                                }
                        },
                        "client" : "127.0.0.1:51324",
                        "desc" : "conn",
                        "threadId" : "0x7f066087f710",
                        "connectionId" : 7,
                        "numYields" : 0
                }
        ]
}
>

Field explanations:

opid: ID of the operation
op: type of operation (query, update, etc.)
ns: namespace, i.e. the object being operated on
query: the specific content of the operation
lockType: the type of lock, indicating whether it is a read lock or a write lock
Killing an operation
> db.killOp(630385)
{ "info" : "attempting to kill op" }

Check again:

> db.currentOp()
{ "inprog" : [ ] }
>

Monitoring system status and performance

The serverStatus command returns statistics about the running MongoDB server. Let's run it and look at the output (the keys vary across platforms and versions):

> db.runCommand({"serverStatus":1})
{
        "host" : "lindenpatservermongodb01",
        "version" : "2.0.2",
        "process" : "mongod",
        "uptime" : 6003,
        "uptimeEstimate" : 5926,
        "localTime" : ISODate("2012-04-15T11:02:21.795Z"),
        "globalLock" : {
                "totalTime" : 6002811172,
                "lockTime" : 24867,
                "ratio" : 0.000004142559092311891,
                "currentQueue" : {
                        "total" : 0,
                        "readers" : 0,
                        "writers" : 0
                },
                "activeClients" : {
                        "total" : 0,
                        "readers" : 0,
                        "writers" : 0
                }
        },
        "mem" : {
                "bits" : 64,
                "resident" : 52,
                "virtual" : 1175,
                "supported" : true,
                "mapped" : 160,
                "mappedWithJournal" : 320
        },
        "connections" : {
                "current" : 1,
                "available" : 818
        },
        "extra_info" : {
                "note" : "fields vary by platform",
                "heap_usage_bytes" : 341808,
                "page_faults" : 14
        },
        "indexCounters" : {
                "btree" : {
                        "accesses" : 1,
                        "hits" : 1,
                        "misses" : 0,
                        "resets" : 0,
                        "missRatio" : 0
                }
        },
        "backgroundFlushing" : {
                "flushes" : 100,
                "total_ms" : 13,
                "average_ms" : 0.13,
                "last_ms" : 1,
                "last_finished" : ISODate("2012-04-15T11:02:19.010Z")
        },
        "cursors" : {
                "totalOpen" : 0,
                "clientCursors_size" : 0,
                "timedOut" : 0
        },
        "network" : {
                "bytesIn" : 1729666458,
                "bytesOut" : 1349989344,
                "numRequests" : 21093517
        },
        "opcounters" : {
                "insert" : 5,
                "query" : 8,
                "update" : 0,
                "delete" : 0,
                "getmore" : 0,
                "command" : 21093463
        },
        "asserts" : {
                "regular" : 0,
                "warning" : 0,
                "msg" : 0,
                "user" : 0,
                "rollovers" : 0
        },
        "writeBacksQueued" : false,
        "dur" : {
                "commits" : 30,
                "journaledMB" : 0,
                "writeToDataFilesMB" : 0,
                "compression" : 0,
                "commitsInWriteLock" : 0,
                "earlyCommits" : 0,
                "timeMs" : {
                        "dt" : 3073,
                        "prepLogBuffer" : 0,
                        "writeToJournal" : 0,
                        "writeToDataFiles" : 0,
                        "remapPrivateView" : 0
                }
        },
        "ok" : 1
}
>
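If you only need one section of this output, it can be read as a sub-document directly in the shell; for example, the connection counters shown above (values reflect the same server state):

> db.serverStatus().connections
{ "current" : 1, "available" : 818 }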
Data export and import: mongoexport and mongoimport

Exporting data with mongoexport

First, look at the data:

> db.t1.find()
{ "_id" : ObjectId("4f8ac746b2764d3c3e2cafbb"), "num" : 1 }
{ "_id" : ObjectId("4f8ac74cb2764d3c3e2cafbc"), "num" : 2 }
{ "_id" : ObjectId("4f8ac74eb2764d3c3e2cafbd"), "num" : 3 }
{ "_id" : ObjectId("4f8ac751b2764d3c3e2cafbe"), "num" : 4 }
{ "_id" : ObjectId("4f8ac755b2764d3c3e2cafbf"), "num" : 5 }
>

Run the export:
./mongoexport.exe -d test -c t1 -o t1.dat
connected to: 127.0.0.1
exported 5 records
[root@mongodb01 /home/mongo/mongodb-2.0.2/bin]$ ls
bsondump  mongodump    mongoimport   mongosniff  t1_count.js
mongo     mongoexport  mongorestore  mongostat   t1.dat
mongod    mongofiles   mongos        mongotop    testfiles.txt
[root@mongodb01 /home/mongo/mongodb-2.0.2/bin]$ cat t1.dat
{ "_id" : ObjectId("4f8ac746b2764d3c3e2cafbb"), "num" : 1 }
{ "_id" : ObjectId("4f8ac74cb2764d3c3e2cafbc"), "num" : 2 }
{ "_id" : ObjectId("4f8ac74eb2764d3c3e2cafbd"), "num" : 3 }
{ "_id" : ObjectId("4f8ac751b2764d3c3e2cafbe"), "num" : 4 }
{ "_id" : ObjectId("4f8ac755b2764d3c3e2cafbf"), "num" : 5 }

Parameters used in ./mongoexport.exe -d test -c t1 -o t1.dat:

-d: the database to use
-c: the collection to export, here t1
-o: the name of the output file; by default it is a relative path, but an absolute path can also be used.

The exported data is in JSON format; CSV can also be exported.

Exporting to CSV looks like this:

./mongoexport -d test -c t1 --csv -f num -o t1.dat
--csv: specifies that the export format is CSV
-f: specifies which fields to export

Importing data with mongoimport

First, drop t1:

> db.t1.drop()
true
> show collections
system.indexes
system.users
>

Now import the data again, this time from the CSV file:

./mongoimport -d test -c t1 --type csv --headerline --file /home/t1.dat
connected to: 127.0.0.1

Check that the imported data matches:

> db.t1.find()
{ "_id" : ObjectId("4f8ac746b2764d3c3e2cafbb"), "num" : 1 }
{ "_id" : ObjectId("4f8ac74cb2764d3c3e2cafbc"), "num" : 2 }
{ "_id" : ObjectId("4f8ac74eb2764d3c3e2cafbd"), "num" : 3 }
{ "_id" : ObjectId("4f8ac751b2764d3c3e2cafbe"), "num" : 4 }
{ "_id" : ObjectId("4f8ac755b2764d3c3e2cafbf"), "num" : 5 }
>
--type csv: the imported data format is CSV. Why import CSV? CSV is better supported across the major mainstream databases, while JSON, as a lightweight data format, still has some drawbacks.
--file: the path of the file to import

Data backup and recovery

Backing up data with mongodump
./mongodump -d test -o /home/dump
-o: the output path for the backup; if this option is omitted, MongoDB automatically creates a dump folder and places the backup files inside it.
Restoring data with mongorestore

mongorestore takes the output of mongodump and inserts the backed-up data into a running MongoDB instance.

./mongorestore -d test dump/*
connected to: 127.0.0.1
Thu Apr 19 18:16:12 dump/test/system.users.bson
Thu Apr 19 18:16:12      going into namespace [test.system.users]
2 objects found
Thu Apr 19 18:16:12 dump/test/t1.bson
Thu Apr 19 18:16:12      going into namespace [test.t1]
5 objects found
Thu Apr 19 18:16:12 dump/test/system.indexes.bson
Thu Apr 19 18:16:12      going into namespace [test.system.indexes]
Thu Apr 19 18:16:12 { key: { _id: 1 }, ns: "test.system.users", name: "_id_" }
Thu Apr 19 18:16:13 { key: { _id: 1 }, ns: "test.t1", name: "_id_" }
2 objects found