Share an example of data monitoring using the oplog mechanism in MongoDB

MongoDB replication records every write operation in a log called the oplog. This article shows how to use the oplog mechanism to implement quasi-real-time monitoring of data operations in MongoDB.

Preface

I recently needed to pick up newly inserted MongoDB data in real time. The insertion program already has its own processing logic, so it was inconvenient to add the monitoring code there directly. Most traditional databases ship with a trigger mechanism for this kind of task, but MongoDB has no equivalent feature (as far as I know; corrections welcome). The solution also had to be written in Python, so I collected and put together the approach below.

1. Introduction

First, note that this requirement is very similar to a database's master-slave replication: a slave can stay synchronized with the master because some indicator tracks replication progress. MongoDB has no ready-made triggers, but it does support master-slave replication, so we start from that mechanism.

2. OPLOG

First, start the mongod daemon in master mode: pass --master on the command line, or set master to true in the configuration file.

A new collection, oplog.$main, then appears in MongoDB's local database; this is where the oplog entries are stored. If the instance were also running as a slave, the local database would contain some slave-related collections as well, but since we are not doing master-slave synchronization here, those collections do not exist.
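
To confirm the oplog is there, a quick check from Python might look like this (a minimal sketch, assuming pymongo is installed and a master-mode mongod is listening on the default localhost:27017):

import pymongo

client = pymongo.MongoClient()
oplog = client.local['oplog.$main']
print(oplog.find_one())  # prints one oplog entry, or None if the oplog is empty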

Let’s take a look at the oplog structure:


"ts" : Timestamp(6417682881216249, 1), 时间戳
"h" : NumberLong(0), 长度
"v" : 2, 
"op" : "n", 操作类型
"ns" : "", 操作的库和集合
"o2" : "_id" update条件
"o" : {} 操作值,即document

The op field takes one of several values:


insert, 'i'
update, 'u'
remove (delete), 'd'
cmd, 'c'
noop, 'n' (a no-op)
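
As a quick reference (this lookup table is just an illustration, not part of the watcher below), the codes map to operations like so:

# hypothetical helper: map oplog op codes to readable names
OP_NAMES = {'i': 'insert', 'u': 'update', 'd': 'delete', 'c': 'command', 'n': 'noop'}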

As the structure above shows, we only need to keep reading ts for comparison, then use op to determine what operation just occurred. In effect, our program plays the role of a slave's receiving end.

3. CODE

I found an existing implementation on GitHub, but the library it uses is too old, so I made modifications on top of it.

Github address: github.com/RedBeard0531/mongo-oplog-watcher

mongo_oplog_watcher.py is as follows:

#!/usr/bin/python
import pymongo
import re
import time
from pprint import pprint # pretty printer
from pymongo.errors import AutoReconnect

class OplogWatcher(object):
  def __init__(self, db=None, collection=None, poll_time=1.0, connection=None, start_now=True):
    if collection is not None:
      if db is None:
        raise ValueError('must specify db if you specify a collection')
      self._ns_filter = db + '.' + collection
    elif db is not None:
      self._ns_filter = re.compile(r'^%s\.' % db)
    else:
      self._ns_filter = None

    self.poll_time = poll_time
    self.connection = connection or pymongo.MongoClient()  # MongoClient replaces the deprecated Connection

    if start_now:
      self.start()

  @staticmethod
  def get_id(op):
    id = None
    o2 = op.get('o2')
    if o2 is not None:
      id = o2.get('_id')

    if id is None:
      id = op['o'].get('_id')

    return id

  def start(self):
    oplog = self.connection.local['oplog.$main']
    ts = oplog.find().sort('$natural', -1)[0]['ts']
    while True:
      if self._ns_filter is None: 
        filter = {}
      else:
        filter = {'ns': self._ns_filter}
      filter['ts'] = {'$gt': ts}
      try:
        cursor = oplog.find(filter, cursor_type=pymongo.CursorType.TAILABLE)  # tailable: keeps returning new entries
        while True:
          for op in cursor:
            ts = op['ts']
            id = self.get_id(op)
            self.all_with_noop(ns=op['ns'], ts=ts, op=op['op'], id=id, raw=op)
          time.sleep(self.poll_time)
          if not cursor.alive:
            break
      except AutoReconnect:
        time.sleep(self.poll_time)

  def all_with_noop(self, ns, ts, op, id, raw):
    if op == 'n':
      self.noop(ts=ts)
    else:
      self.all(ns=ns, ts=ts, op=op, id=id, raw=raw)

  def all(self, ns, ts, op, id, raw):
    if op == 'i':
      self.insert(ns=ns, ts=ts, id=id, obj=raw['o'], raw=raw)
    elif op == 'u':
      self.update(ns=ns, ts=ts, id=id, mod=raw['o'], raw=raw)
    elif op == 'd':
      self.delete(ns=ns, ts=ts, id=id, raw=raw)
    elif op == 'c':
      self.command(ns=ns, ts=ts, cmd=raw['o'], raw=raw)
    elif op == 'db':
      self.db_declare(ns=ns, ts=ts, raw=raw)

  def noop(self, ts):
    pass

  def insert(self, ns, ts, id, obj, raw, **kw):
    pass

  def update(self, ns, ts, id, mod, raw, **kw):
    pass

  def delete(self, ns, ts, id, raw, **kw):
    pass

  def command(self, ns, ts, cmd, raw, **kw):
    pass

  def db_declare(self, ns, ts, **kw):
    pass

class OplogPrinter(OplogWatcher):
  def all(self, **kw):
    pprint(kw)
    print()  # newline

if __name__ == '__main__':
  OplogPrinter()
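
As a usage sketch (the class and collection names here are made up for illustration), a subclass only needs to override the handlers it cares about:

class InsertLogger(OplogWatcher):
  # illustrative subclass: react only to inserts into test.items
  def insert(self, ns, ts, id, obj, raw, **kw):
    print('new document in %s at %s: %r' % (ns, ts, obj))

InsertLogger(db='test', collection='items')  # start_now=True, so this blocks and watches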

First, the constructor initializes the database connection and sets a polling delay (this is what makes the monitor quasi-real-time rather than strictly real-time):


self.poll_time = poll_time
self.connection = connection or pymongo.MongoClient()
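
For example (the host below is a placeholder), the same constructor lets you point the watcher at a remote master and tighten the polling interval:

watcher = OplogWatcher(
  db='test',
  connection=pymongo.MongoClient('192.0.2.10', 27017),  # placeholder host
  poll_time=0.5,     # poll twice a second
  start_now=False)   # call watcher.start() when ready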

The main function is start(), which compares timestamps and processes the relevant fields of each new entry:


def start(self):
  oplog = self.connection.local['oplog.$main']
  # read the oplog collection mentioned earlier
  ts = oplog.find().sort('$natural', -1)[0]['ts']
  # take the newest timestamp as the starting boundary
  while True:
    if self._ns_filter is None:
      filter = {}
    else:
      filter = {'ns': self._ns_filter}
    filter['ts'] = {'$gt': ts}
    try:
      cursor = oplog.find(filter, cursor_type=pymongo.CursorType.TAILABLE)
      # process everything written after the boundary
      while True:
        for op in cursor:
          ts = op['ts']
          id = self.get_id(op)
          self.all_with_noop(ns=op['ns'], ts=ts, op=op['op'], id=id, raw=op)
          # dispatch point: hook insert, update or delete monitoring here
        time.sleep(self.poll_time)
        if not cursor.alive:
          break
    except AutoReconnect:
      time.sleep(self.poll_time)

start() loops indefinitely; write the corresponding monitoring and processing logic in all_with_noop (or in the individual handlers it dispatches to).

With this in place, we have a simple quasi-real-time monitor for MongoDB data operations. The next step is to hook newly arriving data into whatever downstream processing is needed.
