
Is it still necessary to split tables in MongoDB?

MongoDB ships with auto-sharding, so is it still necessary to split tables manually, say when a single table exceeds 100 million records?

天蓬老师  2749 days ago

All replies (4)

  • 给我你的怀抱  2017-05-02 09:28:37

    Well, I downvoted that answer because it misleads readers.
    In any database, building an index is expensive: it has to scan every document in the table, so of course it puts real pressure on the system. That is exactly why the {background: true} option exists, which eases the impact (see the sketch below). For a cluster already under heavy load, the recommended approach is a "rolling" index build: take the nodes offline one at a time, build the index on each, and bring them back online, so the running system is never affected.
    As for locking, the WiredTiger (WT) engine has supported document-level locks (row locks) since MongoDB 3.0.
    If index lookups are costing you a lot, the index is probably not built properly; give a concrete example and we can discuss it.
    The same goes for the "pitfalls past 100 million documents": please give concrete examples to discuss.
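    A minimal sketch of the {background: true} option mentioned above, using pymongo; the "mydb.users" collection and "email" field are made-up names for illustration:

    ```python
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")
    coll = client.mydb.users  # hypothetical collection

    # A foreground build blocks while it scans every document;
    # background=True lets reads and writes continue, at the cost of a
    # slower build. (This applies to the MongoDB versions of this era;
    # later releases changed how index builds work.)
    coll.create_index([("email", ASCENDING)], background=True)
    ```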

  • 天蓬老师  2017-05-02 09:28:37

    MongoDB supports an automatic sharding (partitioning) architecture, which you can use to build a horizontally scalable database cluster that distributes a table's data across the shard nodes (a minimal setup sketch follows below).

    See "mongodb sharding instead of database sharding" [1]: https://yq.aliyun.com/article...
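    To make the auto-sharding suggestion concrete, here is a minimal sketch using pymongo and the standard enableSharding / shardCollection admin commands; the mongos address, the "mydb.orders" namespace, and the "user_id" shard key are assumptions for illustration:

    ```python
    from pymongo import MongoClient

    # Connect to the mongos router (not to an individual shard).
    client = MongoClient("mongodb://mongos-host:27017")

    # Enable sharding for the database, then shard the collection on a
    # hashed key so documents spread evenly across the shard nodes.
    client.admin.command("enableSharding", "mydb")
    client.admin.command("shardCollection", "mydb.orders",
                         key={"user_id": "hashed"})
    ```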

  • 给我你的怀抱  2017-05-02 09:28:37

    Tables can be split by month: Name_03, Name_04, and so on, with the base name unchanged; the program dynamically picks which table to query from the timestamp, as sketched below.
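    A minimal sketch of this month-suffix scheme, assuming a pymongo client and a hypothetical base name "Name":

    ```python
    from datetime import datetime
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")

    def monthly_collection(base, ts):
        """Pick the per-month table from a timestamp, e.g. Name_03, Name_04."""
        return client.mydb[f"{base}_{ts.month:02d}"]

    # All writes and queries go through the name derived from the timestamp.
    coll = monthly_collection("Name", datetime(2017, 3, 15))  # -> mydb.Name_03
    coll.insert_one({"created_at": datetime(2017, 3, 15), "value": 42})
    ```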

  • PHPz  2017-05-02 09:28:37

    I personally think it is still necessary. MongoDB has had the problem of locking the whole database (older versions) and locking the table (middle versions). Even with sharded file storage, the overhead of building and searching indexes is still huge! And past 100 million documents there are still plenty of pitfalls!
