
A detailed, practical tutorial on using Elasticsearch in Spring

零下一度 · Original · 2017-05-26 14:07:18

This article walks through a concrete code implementation of using Elasticsearch in Spring. It is intended as a practical reference; read on if the topic interests you.

Before getting to the code, here are some practical notes about Elasticsearch (ES) worth knowing.

1. ES and Solr are both full-text search engines; both are search servers built on Lucene.

2. ES is not a reliable storage system and is not a database; it carries a risk of losing data.

3. ES is not a real-time system. A successful write only means the operation has reached the translog (similar to MySQL's binlog), so it is normal for a query issued immediately after a successful write to find nothing: at that moment the data may still be in memory rather than in the storage engine. Likewise, a deleted document does not disappear immediately. So when does a write become searchable? A background thread inside ES periodically flushes batches of in-memory data to the storage engine, after which the data is visible; by default this runs once per second. The more often it runs, the lower the write throughput; the less often it runs, the higher the write throughput (within limits). The sketch below shows how to trigger or tune this refresh.
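If you need a freshly written document to be searchable right away, you can force or tune the refresh yourself. The snippet below is a minimal sketch, assuming the same 2.x TransportClient used later in this article and a placeholder index name:

//force a refresh so just-indexed documents become searchable immediately
//(use sparingly: frequent refreshes reduce write throughput)
client.admin().indices().prepareRefresh("my_index").execute().actionGet();

//or change how often the background refresh runs (default is 1s); "my_index" is a placeholder
client.admin().indices().prepareUpdateSettings("my_index")
    .setSettings(Settings.settingsBuilder().put("index.refresh_interval", "5s"))
    .execute().actionGet();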

4. A single ES cluster has been known to hold PB-scale data, although that takes considerable effort; TB-scale data poses no problem at all.


5. The official ES client jar requires JDK 1.7 or above.


6. Use a client version that matches the ES server version: for an ES 1.7 server use the 1.7 client, for a 2.1 server use the 2.1 client (see the Maven sketch below).
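With Maven, that simply means pinning the client artifact to the server's version. The coordinates below are only a sketch, assuming a 2.1.x cluster:

<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch</artifactId>
  <version>2.1.1</version>
</dependency>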


7. ES indexes live on the Linux server's local file system (a plain file system, not a distributed file system such as HDFS).


8. The ES Java client is thread-safe: build one client globally and reuse it for all reads and writes, as the Spring configuration below does. Do not create a new ES client per access; doing so will eventually throw exceptions.


9. Relying on ES's dynamic mapping (automatic field detection and creation) is strongly discouraged, because in many cases the guessed mapping is not what you actually need. The recommended approach is to define the mappings carefully before writing any data; a sketch of how to lock this down follows.
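If you want to go one step further and reject documents containing unmapped fields, you can set dynamic to strict in the mapping. The snippet below is a minimal sketch in the same XContentBuilder style as the createIndex method at the end of this article; the type and field names are placeholders:

XContentBuilder mapping = jsonBuilder()
    .startObject()
      .startObject("myType")                    //placeholder type name
        .field("dynamic", "strict")             //writes containing unknown fields will fail
        .startObject("properties")
          .startObject("user")
            .field("type", "string")
            .field("index", "not_analyzed")
          .endObject()
        .endObject()
      .endObject()
    .endObject();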


10. Deep paging in ES is strongly discouraged; it can render the cluster unusable. For walking through large result sets, use the scroll API instead (see the scrollSearch method later in this article).


11. Sharding in ES is static: the number of shards is fixed when the index is created and cannot be changed afterwards.


12. ES provides the concept of a type. Many people assume a type is a physical table whose data is stored separately, but that is not how ES works internally: a type is just an extra field on every document. So when data can be split into independent indexes, do not cram it into a single index and separate it by type; using types is only reasonable for nested and parent-child documents.


13. ES has no native Chinese word segmentation. Third-party plug-ins such as ik exist, but ik is essentially a toy segmenter; if you have serious segmentation requirements, run a dedicated segmenter, write the segmented text into ES, and then search with ES.


14. An index in ES is split into shards, and each shard normally has one or more replica copies. The shard allocation strategy guarantees that a shard and its replicas are never placed on the same node. When a node in the cluster goes down, the ES master's ping strategy detects that the node is no longer alive and the recovery process is started.


15. ES has no in-place update capability. Every update marks the old document as deleted and re-inserts a new document.
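You can observe this through the document version, which increases on every update because a new document generation replaces the old one. A minimal sketch, with hypothetical index/type/id names and the same 2.x client and jsonBuilder helper used later in this article:

IndexResponse created = client.prepareIndex("my_index", "my_type", "1")
    .setSource(jsonBuilder().startObject().field("user", "kimchy").endObject())
    .execute().actionGet();
UpdateResponse updated = client.prepareUpdate("my_index", "my_type", "1")
    .setDoc(jsonBuilder().startObject().field("user", "kimchy2").endObject())
    .execute().actionGet();
//the version increases because the old document is replaced, not modified in place
System.out.println(created.getVersion() + " -> " + updated.getVersion());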

Okay, back to the topic.


First:

Add the Spring configuration:

<bean id="client" factory-bean="esClientBuilder" factory-method="init" destroy-method="close"/> 
 
  <bean id="esClientBuilder" class="com.***.EsClientBuilder"> 
    <property name="clusterName" value="集群名称" /> 
    <property name="nodeIpInfo" value="集群地址" /> 
  </bean>
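For reference, concrete values might look like the following (hypothetical addresses; note that the transport client connects to port 9300, not the 9200 HTTP port):

<bean id="esClientBuilder" class="com.***.EsClientBuilder">
  <property name="clusterName" value="my-es-cluster" />
  <property name="nodeIpInfo" value="192.168.1.101:9300,192.168.1.102:9300" />
</bean>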

Secondly:

Write the EsClientBuilder class, which reads these parameters and initializes the ES client:

package ***; 
import java.net.InetAddress; 
import java.net.UnknownHostException; 
import java.util.HashMap; 
import java.util.Map; 
import org.elasticsearch.client.Client; 
import org.elasticsearch.client.transport.TransportClient; 
import org.elasticsearch.common.settings.Settings; 
import org.elasticsearch.common.transport.InetSocketTransportAddress; 
public class EsClientBuilder { 
 
 
  private String clusterName; 
  private String nodeIpInfo; 
  private TransportClient client; 
 
  public Client init(){ 
    //set the cluster name 
    Settings settings = Settings.settingsBuilder() 
        .put("client.transport.sniff", false) 
        .put("cluster.name", clusterName) 
        .build(); 
    //create the transport client and add the cluster node addresses 
    client = TransportClient.builder().settings(settings).build(); 
    Map<String, Integer> nodeMap = parseNodeIpInfo(); 
    for (Map.Entry<String,Integer> entry : nodeMap.entrySet()){ 
      try { 
        client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(entry.getKey()), entry.getValue())); 
      } catch (UnknownHostException e) { 
        e.printStackTrace(); 
      } 
    } 
 
    return client; 
  } 
 
  /** 
   * Parse the node IP info. Multiple nodes are separated by commas; IP and port by a colon. 
   * 
   * @return a map from host to port 
   */ 
  private Map<String, Integer> parseNodeIpInfo(){ 
    String[] nodeIpInfoArr = nodeIpInfo.split(","); 
    Map<String, Integer> map = new HashMap<String, Integer>(nodeIpInfoArr.length); 
    for (String ipInfo : nodeIpInfoArr){ 
      String[] ipInfoArr = ipInfo.split(":"); 
      map.put(ipInfoArr[0], Integer.parseInt(ipInfoArr[1])); 
    } 
    return map; 
  } 
 
  public String getClusterName() { 
    return clusterName; 
  } 
 
  public void setClusterName(String clusterName) { 
    this.clusterName = clusterName; 
  } 
 
  public String getNodeIpInfo() { 
    return nodeIpInfo; 
  } 
 
  public void setNodeIpInfo(String nodeIpInfo) { 
    this.nodeIpInfo = nodeIpInfo; 
  } 
}

Finally:

Now we can write our own service class, which operates ES through its native Java API (the 2.x API is shown here). Roughly speaking, an indexName corresponds to a database name and a typeName to a table name.

See the EsServiceImpl.java file:

package ***; 

import java.io.IOException; 
import java.util.*; 
import java.util.concurrent.ExecutionException; 

import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsResponse; 
import org.elasticsearch.action.bulk.*; 
import org.elasticsearch.action.count.CountResponse; 
import org.elasticsearch.action.delete.DeleteResponse; 
import org.elasticsearch.action.get.GetResponse; 
import org.elasticsearch.action.index.*; 
import org.elasticsearch.action.search.*; 
import org.elasticsearch.action.update.*; 
import org.elasticsearch.client.Client; 
import org.elasticsearch.common.settings.Settings; 
import org.elasticsearch.common.unit.TimeValue; 
import org.elasticsearch.common.xcontent.XContentBuilder; 
import org.elasticsearch.index.query.*; 
import org.elasticsearch.search.*; 
import org.elasticsearch.search.aggregations.*; 
import org.elasticsearch.search.aggregations.metrics.sum.*; 
import org.elasticsearch.search.aggregations.metrics.valuecount.*; 
import org.elasticsearch.search.sort.SortOrder; 
import org.springframework.beans.factory.annotation.Autowired; 
import org.springframework.stereotype.Service; 

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; 
import static org.elasticsearch.index.query.QueryBuilders.boolQuery; 
import static org.elasticsearch.index.query.QueryBuilders.termQuery; 
@Service("esService") 
public class EsServiceImpl{ 
  @Autowired 
  private Client client; 
 
  /** 
   * Fetch a document by its docId 
   * @param indexName 
   * @param typeName 
   * @param docId 
   */ 
  public void getWithId(String indexName, String typeName, String docId) { 
    //get with id 
    GetResponse gResponse = client.prepareGet(indexName, typeName, docId).execute().actionGet(); 
    System.out.println(gResponse.getIndex()); 
    System.out.println(gResponse.getType()); 
    System.out.println(gResponse.getVersion()); 
    System.out.println(gResponse.isExists()); 
    Map<String, Object> results = gResponse.getSource(); 
    if(results != null) { 
      for(String key : results.keySet()) { 
        Object field = results.get(key); 
        System.out.println(key); 
        System.out.println(field); 
      } 
    } 
  } 
  public void indexWithBulk(String index, String type) { 
        //create documents, specifying the index name, type name and document id 
        //(the id is optional; if omitted, ES generates one automatically) 
        IndexRequest ir1 = new IndexRequest(); 
        String source1 = "{" + "\"user\":\"kimchy\"," + "\"price\":\"6.3\"," + "\"tid\":\"20001\"," + "\"message\":\"Elasticsearch\"" + "}"; 
        ir1.index(index).type(type).id("100").source(source1); 
        IndexRequest ir2 = new IndexRequest(); 
        String source2 = "{" + "\"user\":\"kimchy2\"," + "\"price\":\"7.3\"," + "\"tid\":\"20002\"," + "\"message\":\"Elasticsearch\"" + "}"; 
        ir2.index(index).type(type).id("102").source(source2); 
        IndexRequest ir3 = new IndexRequest(); 
        String source3 = "{" + "\"user\":\"kimchy3\"," + "\"price\":\"8.3\"," + "\"tid\":\"20003\"," + "\"message\":\"Elasticsearch\"" + "}"; 
        ir3.index(index).type(type).id("103").source(source3); 
        BulkResponse response = client.prepareBulk().add(ir1).add(ir2).add(ir3).execute().actionGet(); 
        BulkItemResponse[] responses = response.getItems(); 
        if(responses != null && responses.length > 0) { 
          for(BulkItemResponse r : responses) { 
            String i = r.getIndex(); 
            String t = r.getType(); 
            System.out.println(i+","+t); 
          } 
        } 
     
  } 
  public void sumCountSearch(String indexName, String typeName, 
    String sumField, String countField, String searchField, String searchValue) { 
    SumBuilder sb = AggregationBuilders.sum("sumPrice").field(sumField); 
    //also count the countField values so the "countTid" branch below has a result to read 
    ValueCountBuilder cb = AggregationBuilders.count("countTid").field(countField); 
    TermQueryBuilder tb = QueryBuilders.termQuery(searchField, searchValue); 
    SearchResponse sResponse = client.prepareSearch(indexName).setTypes(typeName).setQuery(tb).addAggregation(sb).addAggregation(cb).execute().actionGet(); 
    Map<String, Aggregation> aggMap = sResponse.getAggregations().asMap(); 
    if(aggMap != null && aggMap.size() > 0) { 
      for(String key : aggMap.keySet()) { 
        if("sumPrice".equals(key)) { 
          Sum s = (Sum)aggMap.get(key); 
          System.out.println(key + "," + s.getValue());   
        } 
        else if("countTid".equals(key)) { 
          StatsBuilder c = (StatsBuilder)aggMap.get(key); 
          System.out.println(key + "," + c.toString()); 
        } 
      } 
    } 
  } 
  public void updateDoc(String indexName, String typeName, String id) throws IOException, InterruptedException, ExecutionException { 
    UpdateRequest updateRequest = new UpdateRequest(); 
    updateRequest.index(indexName); 
    updateRequest.type(typeName); 
    updateRequest.id(id); 
    updateRequest.doc(jsonBuilder().startObject().field("gender", "male").endObject()); 
    UpdateResponse resp = client.update(updateRequest).get(); 
    System.out.println(resp.getIndex() + "," + resp.getType() + "," + resp.getId() + "," + resp.getVersion()); 
  } 
  public void scrollSearch(String indexName, String typeName, String... ids) { 
    IdsQueryBuilder qb = QueryBuilders.idsQuery().addIds(ids); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        .setSearchType(SearchType.SCAN) 
        .setQuery(qb) 
        .setScroll(new TimeValue(100)) 
        .setSize(50) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
     
    while (true) { 
      SearchHits hits = sResponse.getHits(); 
      SearchHit[] hitArray = hits.getHits(); 
      for(int i = 0; i < hitArray.length; i++) { 
        SearchHit hit = hitArray[i]; 
        Map<String, Object> fields = hit.getSource(); 
        for(String key : fields.keySet()) { 
          System.out.println(key); 
          System.out.println(fields.get(key)); 
        } 
      } 
      sResponse = client.prepareSearchScroll(sResponse.getScrollId()).setScroll(new TimeValue(100)).execute().actionGet(); 
      if (sResponse.getHits().getHits().length == 0) { 
        break; 
      } 
    } 
  } 
  public void deleteDocuments(String indexName, String typeName) { 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(QueryBuilders.matchAllQuery()) 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    System.out.println("total hits: " + count); 
    SearchHit[] hitArray = hits.getHits(); 
    List<String> ids = new ArrayList<String>(hitArray.length); 
    //iterate over the hits actually returned (at most 60 here), not over the total hit count 
    for (int i = 0; i < hitArray.length; i++) { 
      System.out.println("=================================="); 
      SearchHit hit = hitArray[i]; 
      ids.add(hit.getId()); 
    } 
    for (String id : ids) { 
      DeleteResponse response = client.prepareDelete(indexName, typeName, id).execute().actionGet(); 
      System.out.println("deleted " + response.getId() + ", version " + response.getVersion()); 
    } 
  } 
  public void dateRangeSearch(String indexName, String typeName, 
      String termName, String from, String to) { 
    //build the range query (date format e.g. 2015-08-20 12:27:11) 
        QueryBuilder qb = QueryBuilders.rangeQuery(termName).from(from).to(to); 
        SearchResponse sResponse = client.prepareSearch(indexName) 
            .setTypes(typeName) 
            //set the search type; query_then_fetch is the most common: 
            //query each shard first, then merge and sort the results from the different nodes 
            .setSearchType(SearchType.QUERY_THEN_FETCH) 
            //the query condition 
            .setQuery(qb) 
            //sort field 
            .addSort(termName, SortOrder.DESC) 
            //pagination 
            .setFrom(0).setSize(60).execute().actionGet(); 
        int tShards = sResponse.getTotalShards(); 
        long timeCost = sResponse.getTookInMillis(); 
        int sShards = sResponse.getSuccessfulShards(); 
        System.out.println(tShards + "," + timeCost + "," + sShards); 
        SearchHits hits = sResponse.getHits(); 
        long count = hits.getTotalHits(); 
        SearchHit[] hitArray = hits.getHits(); 
        for (int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
          SearchHit hit = hitArray[i]; 
          Map<String, Object> fields = hit.getSource(); 
          for (String key : fields.keySet()) { 
            System.out.println(key); 
            System.out.println(fields.get(key)); 
          } 
        } 
  } 
  public void countWithQuery(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField) { 
    //search result get source 
        CountResponse cResponse = client.prepareCount(indexName) 
            .setTypes(typeName) 
            .setQuery(QueryBuilders.termQuery(termName, termValue)) 
            .execute() 
            .actionGet(); 
        int tShards = cResponse.getTotalShards(); 
        int sShards = cResponse.getSuccessfulShards(); 
        System.out.println(tShards+","+sShards); 
        long count = cResponse.getCount(); 
        System.out.println("count=" + count); 
  } 
  public void rangeSearchWithOtherSearch(String indexName, String typeName, 
      String termName, String min, String max, String termQueryField) { 
    //build the range query and combine it with a term query in a bool query 
        QueryBuilder qb = QueryBuilders.rangeQuery(termName).from(min).to(max); 
        TermQueryBuilder tb = QueryBuilders.termQuery(termName, termQueryField); 
        BoolQueryBuilder bq = boolQuery().must(qb).must(tb); 
        SearchResponse sResponse = client.prepareSearch(indexName) 
            .setTypes(typeName) 
            //set the search type; query_then_fetch is the most common: 
            //query each shard first, then merge and sort the results from the different nodes 
            .setSearchType(SearchType.QUERY_THEN_FETCH) 
            //the combined query 
            .setQuery(bq) 
            //sort field 
            .addSort(termName, SortOrder.DESC) 
            //pagination 
            .setFrom(0).setSize(60).execute().actionGet(); 
        int tShards = sResponse.getTotalShards(); 
        long timeCost = sResponse.getTookInMillis(); 
        int sShards = sResponse.getSuccessfulShards(); 
        System.out.println(tShards + "," + timeCost + "," + sShards); 
        SearchHits hits = sResponse.getHits(); 
        long count = hits.getTotalHits(); 
        SearchHit[] hitArray = hits.getHits(); 
        for (int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
          SearchHit hit = hitArray[i]; 
          Map<String, Object> fields = hit.getSource(); 
          for (String key : fields.keySet()) { 
            System.out.println(key); 
            System.out.println(fields.get(key)); 
          } 
        } 
  } 
  public void termRangeSearch(String indexName, String typeName, 
    String termName, String min, String max, String highlightField) { 
    QueryBuilder qb = QueryBuilders.rangeQuery(termName).from(min).to(max); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //set the search type; query_then_fetch is the most common: 
        //query each shard first, then merge and sort the results from the different nodes 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //the query condition 
        .setQuery(qb) 
        //sort field 
        .addSort(termName, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60).execute().actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards + "," + timeCost + "," + sShards); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    for (int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for (String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  public void sumOneField(String indexName, String typeName, String fieldName) { 
    SumBuilder sb = AggregationBuilders.sum("sum").field(fieldName); 
    //search result get source 
    SearchResponse sResponse = client.prepareSearch(indexName).setTypes(typeName).addAggregation(sb).execute().actionGet(); 
    Map<String, Aggregation> aggMap = sResponse.getAggregations().asMap(); 
    if(aggMap != null && aggMap.size() > 0) { 
      for(String key : aggMap.keySet()) { 
        Sum s = (Sum)aggMap.get(key); 
        System.out.println(s.getValue()); 
      } 
    } 
  } 
  public void searchWithTermQueryAndReturnSpecifiedFields(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField, String... fields) { 
     SearchRequestBuilder sb = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //set the search type; query_then_fetch is the most common: 
        //query each shard first, then merge and sort the results from the different nodes 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //term query on termName/termValue 
        .setQuery(QueryBuilders.termQuery(termName, termValue)) 
        //sort field 
        .addSort(sortField, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60); 
    for (String field : fields) { 
      sb.addField(field); 
    } 
    SearchResponse sResponse = sb.execute().actionGet(); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    for (int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
      SearchHit hit = hitArray[i]; 
      Map<String, SearchHitField> fm = hit.getFields(); 
      for (String key : fm.keySet()) { 
        SearchHitField f = fm.get(key); 
        System.out.println(f.getName()); 
        System.out.println(f.getValue()); 
      } 
    } 
  } 
  public void searchWithIds(String indexName, String typeName, String sortField, String highlightField, String... ids) { 
    IdsQueryBuilder qb = QueryBuilders.idsQuery().addIds(ids); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //set the search type; query_then_fetch is the most common: 
        //query each shard first, then merge and sort the results from the different nodes 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //ids query 
        .setQuery(qb) 
        //sort field 
        .addSort(sortField, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    for(int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
 
  /** 
   * Run a wildcard query against index indexName, type typeName 
   * @param indexName 
   * @param typeName 
   * @param termName 
   * @param termValue 
   * @param sortField 
   * @param highlightField 
   */ 
  public void wildcardSearch(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField) { 
    QueryBuilder qb = QueryBuilders.wildcardQuery(termName, termValue); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //set the search type; query_then_fetch is the most common: 
        //query each shard first, then merge and sort the results from the different nodes 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //wildcard query 
        .setQuery(qb) 
        //sort field (disabled here) 
//       .addSort(sortField, SortOrder.DESC) 
        //highlighted field (disabled here) 
//       .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    for(int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
 
  /** 
   * Run a fuzzy query against index indexName, type typeName 
   * @param indexName 
   * @param typeName 
   * @param termName 
   * @param termValue 
   * @param sortField 
   * @param highlightField 
   */ 
  public void fuzzySearch(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField) { 
     QueryBuilder qb = QueryBuilders.fuzzyQuery(termName, termValue); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //set the search type; query_then_fetch is the most common: 
        //query each shard first, then merge and sort the results from the different nodes 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //fuzzy query 
        .setQuery(qb) 
        //sort field 
        .addSort(sortField, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    for(int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  /** 
   * Run a numeric range query against index indexName, type typeName 
   * @param indexName 
   * @param typeName 
   * @param termName 
   * @param min 
   * @param max 
   * @param highlightField 
   */ 
  public void numericRangeSearch(String indexName, String typeName, 
      String termName, double min, double max, String highlightField) { 
    //build the range query 
    QueryBuilder qb = QueryBuilders.rangeQuery(termName).from(min).to(max); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //set the search type; query_then_fetch is the most common: 
        //query each shard first, then merge and sort the results from the different nodes 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //range query 
        .setQuery(qb) 
        //sort field 
        .addSort(termName, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60).execute().actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards + "," + timeCost + "," + sShards); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    for (int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for (String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  /** 
   * In index indexName, type typeName, search for documents matching both 
   * term1 (termName1, termValue1) and term2 (termName2, termValue2) 
   * @param indexName 
   * @param typeName 
   * @param termName1 
   * @param termValue1 
   * @param termName2 
   * @param termValue2 
   * @param sortField 
   * @param highlightField 
   */ 
  public void searchWithBooleanQuery(String indexName, String typeName, String termName1, String termValue1,  
       String termName2, String termValue2, String sortField, String highlightField) { 
    //build the boolean query 
    BoolQueryBuilder bq = boolQuery().must(termQuery(termName1, termValue1)).must(termQuery(termName2, termValue2));   
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //set the search type; query_then_fetch is the most common: 
        //query each shard first, then merge and sort the results from the different nodes 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //boolean query 
        .setQuery(bq) 
        //sort field 
        .addSort(sortField, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    for(int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  /** 
   * In index indexName, type typeName, search for term (termName, termValue) 
   * @param indexName 
   * @param typeName 
   * @param termName 
   * @param termValue 
   * @param sortField 
   * @param highlightField 
   */ 
  public void searchWithTermQuery(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField) { 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //set the search type; query_then_fetch is the most common: 
        //query each shard first, then merge and sort the results from the different nodes 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //term query on termName/termValue 
        .setQuery(QueryBuilders.termQuery(termName, termValue)) 
        //sort field (disabled here) 
//       .addSort(sortField, SortOrder.DESC) 
        //highlighted field (disabled here) 
//       .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    SearchHits hits = sResponse.getHits(); 
    long count = hits.getTotalHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    for(int i = 0; i < hitArray.length; i++) { //loop over the hits actually returned 
      System.out.println("=================================="); 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  /** 
   * Build a document from a Java map 
   */ 
  public void indexWithMap(String indexName, String typeName) { 
    Map<String, Object> json = new HashMap<String, Object>(); 
    //set the document fields 
    json.put("user","kimchy2"); 
    json.put("postDate",new Date()); 
    json.put("price",6.4); 
    json.put("message","Elasticsearch"); 
    json.put("tid","10002"); 
    json.put("endTime","2015-08-25 09:00:00"); 
    //create the document, specifying the index name, type name and document id 
    //(the id is optional; if omitted, ES generates one automatically) 
    IndexResponse response = client.prepareIndex(indexName, typeName, "2").setSource(json).execute().actionGet(); 
    //the response returns the index name, type name, document id and version 
    String index = response.getIndex(); 
    String type = response.getType(); 
    String id = response.getId(); 
    long version = response.getVersion(); 
    boolean created = response.isCreated(); 
    System.out.println(index+","+type+","+id+","+version+","+created); 
  } 
 
  /** 
   * Build a document from a JSON string 
   */ 
  public void indexWithStr(String indexName, String typeName) { 
    //hand-built JSON string 
    //this document contains the fields user, postDate, price and tid 
    String json = "{" + "\"user\":\"kimchy\"," + "\"postDate\":\"2013-01-30\"," + "\"price\":\"6.3\"," + "\"tid\":\"10001\"" + "}"; 
    //create the document, specifying the index name, type name and document id 
    //(the id is optional; if omitted, ES generates one automatically) 
    IndexResponse response = client.prepareIndex(indexName, typeName, "1") 
        .setSource(json) 
        .execute() 
        .actionGet(); 
    //the response returns the index name, type name, document id and version 
    String index = response.getIndex(); 
    String type = response.getType(); 
    String id = response.getId(); 
    long version = response.getVersion(); 
    boolean created = response.isCreated(); 
    System.out.println(index+","+type+","+id+","+version+","+created); 
  } 
   
  public void deleteDocWithId(String indexName, String typeName, String docId) { 
    DeleteResponse dResponse = client.prepareDelete(indexName, typeName, docId).execute().actionGet(); 
    String index = dResponse.getIndex(); 
    String type = dResponse.getType(); 
    String id = dResponse.getId(); 
    long version = dResponse.getVersion(); 
    System.out.println(index+","+type+","+id+","+version); 
  } 
   
  /** 
   * Create an index and define its mapping 
   * Note: in a production environment, ask the owner of the ES cluster to create the index 
   * @param indexName 
   * @param documentType 
   * @throws IOException 
   */ 
  public void createIndex(String indexName, String documentType) throws IOException { 
    final IndicesExistsResponse iRes = client.admin().indices().prepareExists(indexName).execute().actionGet(); 
    if (iRes.isExists()) { 
      client.admin().indices().prepareDelete(indexName).execute().actionGet(); 
    } 
    client.admin().indices().prepareCreate(indexName).setSettings(Settings.settingsBuilder().put("number_of_shards", 1).put("number_of_replicas", "0")).execute().actionGet(); 
    XContentBuilder mapping = jsonBuilder() 
        .startObject() 
           .startObject(documentType) 
//           .startObject("_routing").field("path","tid").field("required", "true").endObject() 
           .startObject("_source").field("enabled", "true").endObject() 
           .startObject("_all").field("enabled", "false").endObject() 
             .startObject("properties") 
               .startObject("user") 
                 .field("store", true) 
                 .field("type", "string") 
                 .field("index", "not_analyzed") 
                .endObject() 
                .startObject("message") 
                 .field("store", true) 
                 .field("type","string") 
                 .field("index", "analyzed") 
                 .field("analyzer", "standard") 
                .endObject() 
                .startObject("price") 
                 .field("store", true) 
                 .field("type", "float") 
                .endObject() 
                .startObject("nv1") 
                 .field("store", true) 
                 .field("type", "integer") 
                 .field("index", "no") 
                 .field("null_value", 0) 
                .endObject() 
                .startObject("nv2") 
                 .field("store", true) 
                 .field("type", "integer") 
                 .field("index", "not_analyzed") 
                 .field("null_value", 10) 
                .endObject() 
                .startObject("tid") 
                 .field("store", true) 
                 .field("type", "string") 
                 .field("index", "not_analyzed") 
                .endObject() 
                .startObject("endTime") 
                 .field("type", "date") 
                 .field("store", true) 
                 .field("index", "not_analyzed") 
                 .field("format", "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd&#39;T&#39;HH:mm:ss.SSSZ") 
                .endObject() 
                .startObject("date") 
                 .field("type", "date") 
                .endObject() 
             .endObject() 
           .endObject() 
          .endObject(); 
    client.admin().indices() 
        .preparePutMapping(indexName) 
        .setType(documentType) 
        .setSource(mapping) 
        .execute().actionGet(); 
  } 
}
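To round things off, here is a minimal usage sketch. It assumes the methods above are public instance methods, that the Spring configuration from the first section is loaded, and that this code lives inside another Spring-managed bean; the index and type names are hypothetical:

@Autowired
private EsServiceImpl esService;

public void demo() throws Exception {
  esService.createIndex("order_index", "order");      //create the index and mapping first
  esService.indexWithStr("order_index", "order");     //index one document with id "1"
  esService.indexWithBulk("order_index", "order");    //bulk-index three documents
  esService.getWithId("order_index", "order", "1");   //fetch a document by id
  esService.searchWithTermQuery("order_index", "order", "user", "kimchy", "tid", "message");
  esService.deleteDocWithId("order_index", "order", "1");
}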
