Efficient Indexing in MongoDB 2.6


By Osmar Olivo, Product Manager at MongoDB

One of the most powerful features of MongoDB is its rich indexing functionality. Users can specify secondary indexes on any field, compound indexes, geospatial, text, sparse, TTL, and others. Having extensive indexing functionality makes it easier for developers to build apps that provide rich functionality and low latency.

MongoDB 2.6 introduces a new query planner, including the ability to perform index intersection. Prior to 2.6 the query planner could only make use of a single index for most queries. That meant that if you wanted to query on multiple fields together, you needed to create a compound index. It also meant that if there were several different combinations of fields you wanted to query on, you might need several different compound indexes.

Each index adds overhead to your deployment - indexes consume space, on disk and in RAM, and indexes are maintained during updates, which adds disk IO. In other words, indexes improve the efficiency of many operations, but they also come at a cost. For many applications, index intersection will allow users to reduce the number of indexes they need while still providing rich features and low latency.
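To see that overhead concretely, MongoDB's collection statistics report how much space a collection's indexes consume. A minimal sketch in the shell, assuming the phonebook collection used in the example below:

// Total size of all indexes on the collection, in bytes
db.phonebook.stats().totalIndexSize
// Per-index size breakdown
db.phonebook.stats().indexSizes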

In the following sections we will take a deep dive into index intersection and how it can be applied to applications.

An Example - The Phone Book

Let’s take the example of a phone book with the following schema.

{
    FirstName
    LastName
    Phone_Number
    Address
}

If I were to search for “Smith, John” how would I index the following query to be as efficient as possible?

db.phonebook.find({ FirstName : "John", LastName : "Smith" })

I could use an individual index on FirstName and search for all of the “Johns”.

This would look something like ensureIndex( { FirstName : 1 } )

We run this query and get back 200,000 John Smiths. Looking at the explain() output below, however, we see that we scanned 1,000,000 “Johns” in the process of finding 200,000 “John Smiths”.

> db.phonebook.find({ FirstName : "John", LastName : "Smith"}).explain()
{
    "cursor" : "BtreeCursor FirstName_1",
    "isMultiKey" : false,
    "n" : 200000,
    "nscannedObjects" : 1000000,
    "nscanned" : 1000000,
    "nscannedObjectsAllPlans" : 1000101,
    "nscannedAllPlans" : 1000101,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 2,
    "nChunkSkips" : 0,
    "millis" : 2043,
    "indexBounds" : {
        "FirstName" : [
            [
                "John",
                "John"
            ]
        ]
    },
    "server" : "Oz-Olivo-MacBook-Pro.local:27017"
}

How about creating an individual index on LastName?

This would look something like ensureIndex( { LastName : 1 } )

Running this query, we get back 200,000 “John Smiths”, but our explain output says that we now scanned 400,000 “Smiths”. How can we make this better?

db.phonebook.find({ FirstName : "John", LastName : "Smith"}).explain()
{
    "cursor" : "BtreeCursor LastName_1",
    "isMultiKey" : false,
    "n" : 200000,
    "nscannedObjects" : 400000,
    "nscanned" : 400000,
    "nscannedObjectsAllPlans" : 400101,
    "nscannedAllPlans" : 400101,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 1,
    "nChunkSkips" : 0,
    "millis" : 852,
    "indexBounds" : {
        "LastName" : [
            [
                "Smith",
                "Smith"
            ]
        ]
    },
    "server" : "Oz-Olivo-MacBook-Pro.local:27017"
}

So we know that there are 1,000,000 “John” entries, 400,000 “Smith” entries, and 200,000 “John Smith” entries in our phonebook. Is there a way that we can scan just the 200,000 we need?

In the case of a phone book this is somewhat simple; since we know that we want it sorted by LastName, FirstName, we can create a compound index on them, like the one below.

ensureIndex( { LastName : 1, FirstName : 1 } ) 
db.phonebook.find({ FirstName : "John", LastName : "Smith"}).explain()
{
    "cursor" : "BtreeCursor LastName_1_FirstName_1",
    "isMultiKey" : false,
    "n" : 200000,
    "nscannedObjects" : 200000,
    "nscanned" : 200000,
    "nscannedObjectsAllPlans" : 200000,
    "nscannedAllPlans" : 200000,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "millis" : 370,
    "indexBounds" : {
        "LastName" : [
            [
                "Smith",
                "Smith"
            ]
        ],
        "FirstName" : [
            [
                "John",
                "John"
            ]
        ]
    },
    "server" : "Oz-Olivo-MacBook-Pro.local:27017"
}

Looking at the explain on this, we see that the index only scanned the 200,000 documents that matched, so we got a perfect hit.
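The field order also lines up with the sort we said we wanted. As a small illustrative sketch (not from the original article), the same compound index can return results in LastName, FirstName order without an in-memory sort:

// The { LastName : 1, FirstName : 1 } index satisfies both the filter and the sort
db.phonebook.find({ LastName : "Smith" }).sort({ LastName : 1, FirstName : 1 })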

Beyond Compound Indexes

The compound index is a great solution in the case of a phonebook in which we always know how we are going to be querying our data. Now what if we have an application in which users can arbitrarily query for different fields together? We can’t possibly create a compound index for every possible combination because of the overhead imposed by indexes, as we discussed above, and because MongoDB limits you to 64 indexes per collection. This is where index intersection can really help.

Imagine the case of a medical application which doctors use to filter through patients. At a high level, omitting several details, a basic schema may look something like the below.

{
      Fname
      LName
      SSN
      Age
      Blood_Type
      Conditions : [] 
      Medications : [ ]
      ...
      ...
}

Some sample searches that a doctor/nurse may run on this system would look something like the below.

Find me a Patient with Blood_Type = O under the age of 50

db.patients.find( { Blood_Type : "O", Age : { $lt : 50 } } )

Find me all patients over the age of 60 on Medication X

db.patients.find( { Medications : "X", Age : { $gt : 60 } } )

Find me all Diabetic patients on medication Y

db.patients.find( { Conditions : "Diabetes", Medications : "Y" } )

With all of the unstructured data in modern applications, along with the desire to be able to search for things as needed in an ad-hoc way, it can become very difficult to predict usage patterns. Since we can’t possibly create compound indexes for every combination of fields, because we don’t necessarily know what those will be ahead of time, we can try indexing individual fields to try to salvage some performance. But as shown above in our phone book application, this can lead to performance issues in which we pull documents into memory that are not matches.

To avoid the paging of unnecessary data, the new index intersection feature in 2.6 increases the overall efficiency of these types of ad-hoc queries by processing the indexes involved individually and then intersecting the result set to find the matching documents. This means you only pull the final matching documents into memory and everything else is processed using the indexes. This processing will utilize more CPU, but should greatly reduce the amount of IO done for queries where all of the data is not in memory as well as allow you to utilize your memory more efficiently.
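Concretely, with the patient schema above, a hedged sketch of this approach (assuming a patients collection with the fields shown earlier) is to index each commonly filtered field on its own and let the 2.6 planner intersect whichever pair a given query needs:

// One single-field index per commonly queried field; the 2.6 planner
// can intersect a pair of them to answer the ad-hoc queries above.
db.patients.ensureIndex( { Blood_Type : 1 } )
db.patients.ensureIndex( { Age : 1 } )
db.patients.ensureIndex( { Medications : 1 } )
db.patients.ensureIndex( { Conditions : 1 } )

Any of the three sample searches above can then be answered by intersecting two of these indexes, rather than by maintaining a dedicated compound index for every combination.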

For example, consider the first sample query from earlier:

db.patients.find( { Blood_Type : "O", Age : { $lt : 50 } } )

It is inefficient to find all patients with Blood_Type : "O" (which could be millions) and then pull each of those documents into memory just to find the ones with Age < 50.

Instead, the query planner finds all patients with Blood_Type : "O" using the index on Blood_Type, finds all patients with Age < 50 using the index on Age, and then intersects the two result sets so that only the documents matching both conditions are fetched.
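To confirm which plan was chosen, you can run explain() on the query once the single-field indexes from the sketch above exist. In 2.6, a winning index intersection plan is reported with a "Complex Plan" cursor; the exact counts depend on your data, so only the call is shown here:

// Assumes the single-field indexes on Blood_Type and Age created earlier
db.patients.find( { Blood_Type : "O", Age : { $lt : 50 } } ).explain()
// An index intersection plan appears as "cursor" : "Complex Plan"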

Index intersection allows for much more efficient use of existing RAM, so less total memory will usually be required to fit the working set than previously. Also, if you had several compound indexes made up of different combinations of fields, you can now reduce the total number of indexes on the system. This means storing fewer indexes in memory, as well as better insert/update performance, since fewer indexes must be maintained.

As of version 2.6.0, you cannot intersect with geo or text indexes, and you can intersect at most 2 separate indexes with each other. These limitations are likely to change in a future release.

Optimizing Multi-key Indexes

It is also possible to intersect an index with itself in the case of multi-key indexes. Consider the below query:

Find me all patients with Diabetes & High Blood Pressure

db.patients.find( { Conditions : { $all : [ "Diabetes", "High Blood Pressure" ] } } )

In this case we will find the result set of all patients with Diabetes, and the result set of all patients with High Blood Pressure, and intersect the two to get all patients with both. Again, this requires less memory and disk IO, for better overall performance. As of the 2.6.0 release, an index can intersect with itself up to 10 times.
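A minimal sketch of this, again assuming the patients collection above: a single multi-key index on the Conditions array is enough, and the planner can intersect it with itself once per value in the $all list.

// One multi-key index on the Conditions array
db.patients.ensureIndex( { Conditions : 1 } )
// The planner may intersect this index with itself, once per $all value
db.patients.find( { Conditions : { $all : [ "Diabetes", "High Blood Pressure" ] } } ).explain()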

Do We Still Need Compound Indexes?

To be clear, compound indexing will ALWAYS be more performant IF you know what you are going to be querying on and can create one ahead of time. Furthermore, if your working set is entirely in memory, then you will not reap any of the benefits of Index Intersection as it is primarily based on reducing IO. But in a more ad-hoc case where one cannot predict the shape of the queries and the working set is much larger than available memory, index intersection will automatically take over and choose the most performant path.

  • Download MongoDB 2.6 Today
  • Learn about all of the key new features in MongoDB 2.6 by downloading the whitepaper