A practical tutorial on using Elasticsearch in Spring

This article walks through a concrete code implementation of using Elasticsearch in Spring, which may serve as a useful reference.

Before getting to the code, here are some practical notes about Elasticsearch (ES).

1. Both ES and Solr are full-text search engines: both are search servers built on Lucene.

2. ES is not a reliable storage system or a database. It has the risk of losing data.

3. ES is not a real-time system. A successful write only means the translog (similar to MySQL's binlog) was written. It is normal for a query issued immediately after a successful write to find nothing, because the data may still be in memory rather than in the storage engine. Likewise, a deleted document does not disappear immediately. So when does a write become queryable? A background thread inside ES periodically flushes a batch of in-memory data to the storage engine, after which the data is visible. By default this thread runs once per second. The more frequently it runs, the lower the write throughput; the less frequently it runs, the higher the write throughput (though not without bound).
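The refresh cadence described above is controlled per index by the `refresh_interval` setting; a sketch of the index-settings JSON (the value shown is the default):

```json
{
  "index": {
    "refresh_interval": "1s"
  }
}
```

Raising the interval (e.g. `30s`) trades visibility latency for write throughput; setting it to `-1` disables periodic refresh entirely.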

4. A single ES cluster is known to be able to hold petabyte-scale data, but doing so takes considerable effort. Terabyte-scale data poses no problem.


5. If you access ES with the officially provided jar, you need JDK 1.7 or above.


6. Use the client version that matches the ES server: if the server runs 1.7, use the 1.7 client; if it runs 2.1, use the 2.1 client.


7. ES indexes live on the local file system of the server (a plain file system, not a distributed file system like HDFS).


8. The ES Java client is thread-safe: a single global instance is enough for all reads and writes. Do not create an ES client per request; building a new client on every access will eventually throw exceptions.
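The single-global-client advice can be implemented with the initialization-on-demand holder idiom; a minimal sketch (the `ClientHolder` and `FakeClient` names are hypothetical stand-ins — in the Spring setup below, the container manages the singleton for you):

```java
public class ClientHolder {
    // Stand-in for the real ES client; in practice this would be a TransportClient.
    static class FakeClient {}

    // Initialization-on-demand holder: the JVM guarantees thread-safe lazy init.
    private static class Holder {
        static final FakeClient INSTANCE = new FakeClient();
    }

    public static FakeClient get() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every call returns the same instance.
        System.out.println(ClientHolder.get() == ClientHolder.get()); // true
    }
}
```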


9. It is strongly discouraged to rely on ES's dynamic mapping (automatic field detection and creation), because it often does not produce what you need. The recommended approach is to define mappings carefully before writing data.


10. Deep paging in ES is strongly discouraged; it can render the cluster unusable.
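To see why deep paging is expensive: with `from + size` pagination, every shard must collect and sort `from + size` candidate hits, and the coordinating node then merges `shards × (from + size)` entries. A small illustrative calculation (shard counts and page numbers are hypothetical):

```java
public class DeepPagingCost {
    // Entries the coordinating node must collect and merge for a from/size query.
    static long mergedEntries(int shards, int from, int size) {
        return (long) shards * (from + size);
    }

    public static void main(String[] args) {
        // Page 1 on a 5-shard index: 5 * (0 + 10) = 50 entries.
        System.out.println(mergedEntries(5, 0, 10));
        // Page 1000: 5 * (9990 + 10) = 50000 entries merged, for every such query.
        System.out.println(mergedEntries(5, 9990, 10));
    }
}
```

This is why the scroll API (shown later in `scrollSearch`) is preferred for walking large result sets.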


11. Sharding in ES is static: the number of shards is fixed when the index is created and cannot be changed afterwards.


12. ES provides the notion of a type. Many people assume a type is a physical table whose data is stored independently, but that is not how ES works internally: a type is just a field inside ES. So when data can be split into independent indexes, do not cram it into one index divided by types. Using types is only reasonable for nested and parent-child documents.


13. ES has no native Chinese word segmentation. Third-party Chinese segmentation plug-ins exist, such as ik, but ik is a toy segmenter. If you have serious segmentation needs, run an independent segmenter, then write the segmented text into ES before querying.


14. An index in ES is split into shards, and each shard generally has replicas. ES's shard allocation strategy guarantees that a shard and its replicas are never placed on the same node. When a node in the cluster goes down, the ES master detects via its ping strategy that the node is no longer alive and starts the recovery process.


15. ES has no in-place update capability. Every update marks the old document as deleted and re-inserts a new document.

Okay, back to the topic.


First:

Add the Spring configuration:

<bean id="client" factory-bean="esClientBuilder" factory-method="init" destroy-method="close"/> 
 
  <bean id="esClientBuilder" class="com.***.EsClientBuilder"> 
    <property name="clusterName" value="cluster name" /> 
    <property name="nodeIpInfo" value="cluster node addresses" /> 
  </bean>
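For example, with hypothetical values filled in (host IPs are illustrative; 9300 is the default transport port):

```xml
<bean id="esClientBuilder" class="com.***.EsClientBuilder">
  <property name="clusterName" value="my-es-cluster" />
  <property name="nodeIpInfo" value="10.0.0.1:9300,10.0.0.2:9300" />
</bean>
```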

Secondly:

Write the EsClientBuilder class, which initializes the ES client:

package ***; 
import java.net.InetAddress; 
import java.net.UnknownHostException; 
import java.util.HashMap; 
import java.util.Map; 
import org.elasticsearch.client.Client; 
import org.elasticsearch.client.transport.TransportClient; 
import org.elasticsearch.common.settings.Settings; 
import org.elasticsearch.common.transport.InetSocketTransportAddress; 
public class EsClientBuilder { 
 
 
  private String clusterName; 
  private String nodeIpInfo; 
  private TransportClient client; 
 
  public Client init(){ 
    //set the cluster name 
    Settings settings = Settings.settingsBuilder() 
        .put("client.transport.sniff", false) 
        .put("cluster.name", clusterName) 
        .build(); 
    //create the transport client and add the cluster node addresses 
    client = TransportClient.builder().settings(settings).build(); 
    Map<String, Integer> nodeMap = parseNodeIpInfo(); 
    for (Map.Entry<String,Integer> entry : nodeMap.entrySet()){ 
      try { 
        client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(entry.getKey()), entry.getValue())); 
      } catch (UnknownHostException e) { 
        e.printStackTrace(); 
      } 
    } 
 
    return client; 
  } 
 
  /** 
   * Parse the node IP info: nodes are separated by commas, IP and port by a colon. 
   * 
   * @return map of host to port 
   */ 
  private Map<String, Integer> parseNodeIpInfo(){ 
    String[] nodeIpInfoArr = nodeIpInfo.split(","); 
    Map<String, Integer> map = new HashMap<String, Integer>(nodeIpInfoArr.length); 
    for (String ipInfo : nodeIpInfoArr){ 
      String[] ipInfoArr = ipInfo.split(":"); 
      map.put(ipInfoArr[0], Integer.parseInt(ipInfoArr[1])); 
    } 
    return map; 
  } 
 
  public String getClusterName() { 
    return clusterName; 
  } 
 
  public void setClusterName(String clusterName) { 
    this.clusterName = clusterName; 
  } 
 
  public String getNodeIpInfo() { 
    return nodeIpInfo; 
  } 
 
  public void setNodeIpInfo(String nodeIpInfo) { 
    this.nodeIpInfo = nodeIpInfo; 
  } 
}
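For reference, the `nodeIpInfo` format expected by `parseNodeIpInfo` is comma-separated `host:port` pairs. A standalone sketch of the same parsing logic (host names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class NodeIpParseDemo {
    // Mirrors EsClientBuilder.parseNodeIpInfo: "host:port,host:port" -> host-to-port map.
    static Map<String, Integer> parse(String nodeIpInfo) {
        Map<String, Integer> map = new HashMap<>();
        for (String ipInfo : nodeIpInfo.split(",")) {
            String[] parts = ipInfo.split(":");
            map.put(parts[0], Integer.parseInt(parts[1]));
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, Integer> nodes = parse("es-node1:9300,es-node2:9301");
        System.out.println(nodes.get("es-node1")); // 9300
        System.out.println(nodes.get("es-node2")); // 9301
    }
}
```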

Finally:

Now we can write our own service class. It operates ES through the native ES Java API (version 2.x is shown here). Here, indexName is analogous to a database name and typeName to a table name.

See the EsServiceImpl.java file:

package ***; 
import java.io.IOException; 
import java.util.ArrayList; 
import java.util.Date; 
import java.util.HashMap; 
import java.util.List; 
import java.util.Map; 
import java.util.concurrent.ExecutionException; 
import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsResponse; 
import org.elasticsearch.action.bulk.BulkItemResponse; 
import org.elasticsearch.action.bulk.BulkResponse; 
import org.elasticsearch.action.count.CountResponse; 
import org.elasticsearch.action.delete.DeleteResponse; 
import org.elasticsearch.action.get.GetResponse; 
import org.elasticsearch.action.index.IndexRequest; 
import org.elasticsearch.action.index.IndexResponse; 
import org.elasticsearch.action.search.SearchRequestBuilder; 
import org.elasticsearch.action.search.SearchResponse; 
import org.elasticsearch.action.search.SearchType; 
import org.elasticsearch.action.update.UpdateRequest; 
import org.elasticsearch.action.update.UpdateResponse; 
import org.elasticsearch.client.Client; 
import org.elasticsearch.common.settings.Settings; 
import org.elasticsearch.common.unit.TimeValue; 
import org.elasticsearch.common.xcontent.XContentBuilder; 
import org.elasticsearch.index.query.BoolQueryBuilder; 
import org.elasticsearch.index.query.IdsQueryBuilder; 
import org.elasticsearch.index.query.QueryBuilder; 
import org.elasticsearch.index.query.QueryBuilders; 
import org.elasticsearch.index.query.TermQueryBuilder; 
import org.elasticsearch.search.SearchHit; 
import org.elasticsearch.search.SearchHitField; 
import org.elasticsearch.search.SearchHits; 
import org.elasticsearch.search.aggregations.Aggregation; 
import org.elasticsearch.search.aggregations.AggregationBuilders; 
import org.elasticsearch.search.aggregations.metrics.sum.Sum; 
import org.elasticsearch.search.aggregations.metrics.sum.SumBuilder; 
import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCount; 
import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountBuilder; 
import org.elasticsearch.search.sort.SortOrder; 
import org.springframework.beans.factory.annotation.Autowired; 
import org.springframework.stereotype.Service; 
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; 
@Service("esService") 
public class EsServiceImpl{ 
  @Autowired 
  private Client client; 
 
  /** 
   * Fetch a document by docId. 
   * @param indexName 
   * @param typeName 
   * @param docId 
   */ 
  private void getWithId(String indexName, String typeName, String docId) { 
    //get with id 
    GetResponse gResponse = client.prepareGet(indexName, typeName, docId).execute().actionGet(); 
    System.out.println(gResponse.getIndex()); 
    System.out.println(gResponse.getType()); 
    System.out.println(gResponse.getVersion()); 
    System.out.println(gResponse.isExists()); 
    Map<String, Object> results = gResponse.getSource(); 
    if(results != null) { 
      for(String key : results.keySet()) { 
        Object field = results.get(key); 
        System.out.println(key); 
        System.out.println(field); 
      } 
    } 
  } 
  private void indexWithBulk(String index, String type) { 
        //create documents with explicit index name, type name and documentId (the id is optional; if unset, one is generated) 
        IndexRequest ir1 = new IndexRequest(); 
        String source1 = "{" + "\"user\":\"kimchy\"," + "\"price\":\"6.3\"," + "\"tid\":\"20001\"," + "\"message\":\"Elasticsearch\"" + "}"; 
        ir1.index(index).type(type).id("100").source(source1); 
        IndexRequest ir2 = new IndexRequest(); 
        String source2 = "{" + "\"user\":\"kimchy2\"," + "\"price\":\"7.3\"," + "\"tid\":\"20002\"," + "\"message\":\"Elasticsearch\"" + "}"; 
        ir2.index(index).type(type).id("102").source(source2); 
        IndexRequest ir3 = new IndexRequest(); 
        String source3 = "{" + "\"user\":\"kimchy3\"," + "\"price\":\"8.3\"," + "\"tid\":\"20003\"," + "\"message\":\"Elasticsearch\"" + "}"; 
        ir3.index(index).type(type).id("103").source(source3); 
        BulkResponse response = client.prepareBulk().add(ir1).add(ir2).add(ir3).execute().actionGet(); 
        BulkItemResponse[] responses = response.getItems(); 
        if(responses != null && responses.length > 0) { 
          for(BulkItemResponse r : responses) { 
            String i = r.getIndex(); 
            String t = r.getType(); 
            System.out.println(i+","+t); 
          } 
        } 
     
  } 
  private void sumCountSearch(String indexName, String typeName, 
    String sumField, String countField, String searchField, String searchValue) { 
    SumBuilder sb = AggregationBuilders.sum("sumPrice").field(sumField); 
    //add a count aggregation as well, so the "countTid" key below can actually appear in the result 
    ValueCountBuilder cb = AggregationBuilders.count("countTid").field(countField); 
    TermQueryBuilder tb = QueryBuilders.termQuery(searchField, searchValue); 
    SearchResponse sResponse = client.prepareSearch(indexName).setTypes(typeName).setQuery(tb).addAggregation(sb).addAggregation(cb).execute().actionGet(); 
    Map<String, Aggregation> aggMap = sResponse.getAggregations().asMap(); 
    if(aggMap != null && aggMap.size() > 0) { 
      for(String key : aggMap.keySet()) { 
        if("sumPrice".equals(key)) { 
          Sum s = (Sum)aggMap.get(key); 
          System.out.println(key + "," + s.getValue()); 
        } 
        else if("countTid".equals(key)) { 
          ValueCount c = (ValueCount)aggMap.get(key); 
          System.out.println(key + "," + c.getValue()); 
        } 
      } 
    } 
  } 
  private void updateDoc(String indexName, String typeName, String id) throws IOException, InterruptedException, ExecutionException { 
    UpdateRequest updateRequest = new UpdateRequest(); 
    updateRequest.index(indexName); 
    updateRequest.type(typeName); 
    updateRequest.id(id); 
    updateRequest.doc(jsonBuilder().startObject().field("gender", "male").endObject()); 
    UpdateResponse resp = client.update(updateRequest).get(); 
    System.out.println(resp.getVersion()); 
  } 
  private void scrollSearch(String indexName, String typeName, String... ids) { 
    IdsQueryBuilder qb = QueryBuilders.idsQuery().addIds(ids); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        .setSearchType(SearchType.SCAN) 
        .setQuery(qb) 
        //keep the scroll context alive for 100 ms between fetches 
        .setScroll(new TimeValue(100)) 
        .setSize(50) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
     
    while (true) { 
      SearchHits hits = sResponse.getHits(); 
      SearchHit[] hitArray = hits.getHits(); 
      for(int i = 0; i < hitArray.length; i++) { 
        SearchHit hit = hitArray[i]; 
        Map<String, Object> fields = hit.getSource(); 
        for(String key : fields.keySet()) { 
          System.out.println(key); 
          System.out.println(fields.get(key)); 
        } 
      } 
      sResponse = client.prepareSearchScroll(sResponse.getScrollId()).setScroll(new TimeValue(100)).execute().actionGet(); 
      if (sResponse.getHits().getHits().length == 0) { 
        break; 
      } 
    } 
  } 
  private void deleteDocuments(String indexName, String typeName) { 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(QueryBuilders.matchAllQuery()) 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    List<String> ids = new ArrayList<String>(hitArray.length); 
    //iterate over the returned page only (hitArray.length), not the total hit count 
    for(int i = 0; i < hitArray.length; i++) { 
      System.out.println("=================================="); 
      ids.add(hitArray[i].getId()); 
    } 
    for(String id : ids) { 
      DeleteResponse response = client.prepareDelete(indexName, typeName, id).execute().actionGet(); 
      System.out.println(response.getId()); 
    } 
  } 
  private void dateRangeSearch(String indexName, String typeName, 
      String termName, String from, String to) { 
    //build the range query 
    //date format example: 2015-08-20 12:27:11 
    QueryBuilder qb = QueryBuilders.rangeQuery(termName).from(from).to(to); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        //the most common search type: query_then_fetch, which queries each shard, 
        //then merges and sorts the results from the different nodes 
        .setTypes(typeName) 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(qb) 
        //sort field 
        .addSort(termName, SortOrder.DESC) 
        //pagination 
        .setFrom(0).setSize(60).execute().actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards + "," + timeCost + "," + sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for (int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for (String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  private void countWithQuery(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField) { 
    //count the documents matching a term query 
    CountResponse cResponse = client.prepareCount(indexName) 
        .setTypes(typeName) 
        .setQuery(QueryBuilders.termQuery(termName, termValue)) 
        .execute() 
        .actionGet(); 
    int tShards = cResponse.getTotalShards(); 
    int sShards = cResponse.getSuccessfulShards(); 
    long count = cResponse.getCount(); 
    System.out.println(tShards + "," + sShards + "," + count); 
  } 
  private void rangeSearchWithOtherSearch(String indexName, String typeName, 
      String termName, String min, String max, String termQueryField) { 
    //combine a range query and a term query in a bool query 
    QueryBuilder qb = QueryBuilders.rangeQuery(termName).from(min).to(max); 
    TermQueryBuilder tb = QueryBuilders.termQuery(termName, termQueryField); 
    BoolQueryBuilder bq = QueryBuilders.boolQuery().must(qb).must(tb); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(bq) 
        //sort field 
        .addSort(termName, SortOrder.DESC) 
        //pagination 
        .setFrom(0).setSize(60).execute().actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards + "," + timeCost + "," + sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for (int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for (String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  private void termRangeSearch(String indexName, String typeName, 
    String termName, String min, String max, String highlightField) { 
    //build the range query 
    QueryBuilder qb = QueryBuilders.rangeQuery(termName).from(min).to(max); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(qb) 
        //sort field 
        .addSort(termName, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60).execute().actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards + "," + timeCost + "," + sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for (int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for (String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  private void sumOneField(String indexName, String typeName, String fieldName) { 
    SumBuilder sb = AggregationBuilders.sum("sum").field(fieldName); 
    //search result get source 
    SearchResponse sResponse = client.prepareSearch(indexName).setTypes(typeName).addAggregation(sb).execute().actionGet(); 
    Map<String, Aggregation> aggMap = sResponse.getAggregations().asMap(); 
    if(aggMap != null && aggMap.size() > 0) { 
      for(String key : aggMap.keySet()) { 
        Sum s = (Sum)aggMap.get(key); 
        System.out.println(s.getValue()); 
      } 
    } 
  } 
  private void searchWithTermQueryAndReturnSpecifiedFields(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField, String... fields) { 
    SearchRequestBuilder sb = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //term query on termName/termValue 
        .setQuery(QueryBuilders.termQuery(termName, termValue)) 
        //sort field 
        .addSort(sortField, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60); 
    for (String field : fields) { 
      sb.addField(field); 
    } 
    SearchResponse sResponse = sb.execute().actionGet(); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for (int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, SearchHitField> fm = hit.getFields(); 
      for (String key : fm.keySet()) { 
        SearchHitField f = fm.get(key); 
        System.out.println(f.getName()); 
        System.out.println(f.getValue()); 
      } 
    } 
  } 
  private void searchWithIds(String indexName, String typeName, String sortField, String highlightField, String... ids) { 
    IdsQueryBuilder qb = QueryBuilders.idsQuery().addIds(ids); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(qb) 
        //sort field 
        .addSort(sortField, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for(int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
 
  /** 
   * Run a wildcard query against index indexName, type typeName. 
   * @param indexName 
   * @param typeName 
   * @param termName 
   * @param termValue 
   * @param sortField 
   * @param highlightField 
   */ 
  private void wildcardSearch(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField) { 
    QueryBuilder qb = QueryBuilders.wildcardQuery(termName, termValue); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(qb) 
        //sort field (optional) 
//       .addSort(sortField, SortOrder.DESC) 
        //highlighted field (optional) 
//       .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for(int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
 
  /** 
   * Run a fuzzy query against index indexName, type typeName. 
   * @param indexName 
   * @param typeName 
   * @param termName 
   * @param termValue 
   * @param sortField 
   * @param highlightField 
   */ 
  private void fuzzySearch(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField) { 
    QueryBuilder qb = QueryBuilders.fuzzyQuery(termName, termValue); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(qb) 
        //sort field 
        .addSort(sortField, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for(int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  /** 
   * Run a numeric range query over [min, max] against index indexName, type typeName. 
   * @param indexName 
   * @param typeName 
   * @param termName 
   * @param min 
   * @param max 
   * @param highlightField 
   */ 
  private void numericRangeSearch(String indexName, String typeName, 
      String termName, double min, double max, String highlightField) { 
    //build the range query 
    QueryBuilder qb = QueryBuilders.rangeQuery(termName).from(min).to(max); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(qb) 
        //sort field 
        .addSort(termName, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60).execute().actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards + "," + timeCost + "," + sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for (int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for (String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  /** 
   * Search index indexName, type typeName for documents matching both 
   * term1 (termName1, termValue1) and term2 (termName2, termValue2). 
   * @param indexName 
   * @param typeName 
   * @param termName1 
   * @param termValue1 
   * @param termName2 
   * @param termValue2 
   * @param sortField 
   * @param highlightField 
   */ 
  private void searchWithBooleanQuery(String indexName, String typeName, String termName1, String termValue1,  
       String termName2, String termValue2, String sortField, String highlightField) { 
    //build the boolean query 
    BoolQueryBuilder bq = QueryBuilders.boolQuery() 
        .must(QueryBuilders.termQuery(termName1, termValue1)) 
        .must(QueryBuilders.termQuery(termName2, termValue2)); 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        .setQuery(bq) 
        //sort field 
        .addSort(sortField, SortOrder.DESC) 
        //highlighted field 
        .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for(int i = 0; i < hitArray.length; i++) { 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  /** 
   * Search index indexName, type typeName for the term (termName, termValue). 
   * @param indexName 
   * @param typeName 
   * @param termName 
   * @param termValue 
   * @param sortField 
   * @param highlightField 
   */ 
  private void searchWithTermQuery(String indexName, String typeName, String termName, String termValue, String sortField, String highlightField) { 
    SearchResponse sResponse = client.prepareSearch(indexName) 
        .setTypes(typeName) 
        //the most common search type: query_then_fetch 
        .setSearchType(SearchType.QUERY_THEN_FETCH) 
        //term query on termName/termValue 
        .setQuery(QueryBuilders.termQuery(termName, termValue)) 
        //sort field (optional) 
//       .addSort(sortField, SortOrder.DESC) 
        //highlighted field (optional) 
//       .addHighlightedField(highlightField) 
        //pagination 
        .setFrom(0).setSize(60) 
        .execute() 
        .actionGet(); 
    int tShards = sResponse.getTotalShards(); 
    long timeCost = sResponse.getTookInMillis(); 
    int sShards = sResponse.getSuccessfulShards(); 
    System.out.println(tShards+","+timeCost+","+sShards); 
    SearchHits hits = sResponse.getHits(); 
    SearchHit[] hitArray = hits.getHits(); 
    //iterate over the returned page only, not the total hit count 
    for(int i = 0; i < hitArray.length; i++) { 
      System.out.println("=================================="); 
      SearchHit hit = hitArray[i]; 
      Map<String, Object> fields = hit.getSource(); 
      for(String key : fields.keySet()) { 
        System.out.println(key); 
        System.out.println(fields.get(key)); 
      } 
    } 
  } 
  /** 
   * Build a document from a Java map. 
   */ 
  private void indexWithMap(String indexName, String typeName) { 
    Map<String, Object> json = new HashMap<String, Object>(); 
    //set the document fields 
    json.put("user","kimchy2"); 
    json.put("postDate",new Date()); 
    json.put("price",6.4); 
    json.put("message","Elasticsearch"); 
    json.put("tid","10002"); 
    json.put("endTime","2015-08-25 09:00:00"); 
    //create the document with explicit index name, type name and documentId (the id is optional; if unset, one is generated) 
    IndexResponse response = client.prepareIndex(indexName, typeName, "2").setSource(json).execute().actionGet(); 
    //the response carries the index name, type name, doc id and version 
    String index = response.getIndex(); 
    String type = response.getType(); 
    String id = response.getId(); 
    long version = response.getVersion(); 
    boolean created = response.isCreated(); 
    System.out.println(index+","+type+","+id+","+version+","+created); 
  } 
 
  /** 
   * Build a document from a Java string. 
   */ 
  private void indexWithStr(String indexName, String typeName) { 
    //hand-built JSON string 
    //the document contains the fields user, postDate, price and tid 
    String json = "{" + "\"user\":\"kimchy\"," + "\"postDate\":\"2013-01-30\"," + "\"price\":\"6.3\"," + "\"tid\":\"10001\"" + "}"; 
    //create the document with explicit index name, type name and documentId (the id is optional; if unset, one is generated) 
    IndexResponse response = client.prepareIndex(indexName, typeName, "1") 
        .setSource(json) 
        .execute() 
        .actionGet(); 
    //the response carries the index name, type name, doc id and version 
    String index = response.getIndex(); 
    String type = response.getType(); 
    String id = response.getId(); 
    long version = response.getVersion(); 
    System.out.println(index+","+type+","+id+","+version+","+response.isCreated()); 
  } 
   
  private void deleteDocWithId(String indexName, String typeName, String docId) { 
    DeleteResponse dResponse = client.prepareDelete(indexName, typeName, docId).execute().actionGet(); 
    String index = dResponse.getIndex(); 
    String type = dResponse.getType(); 
    String id = dResponse.getId(); 
    long version = dResponse.getVersion(); 
    System.out.println(index+","+type+","+id+","+version); 
  } 
   
  /** 
   * Create an index 
   * Note: in a production environment, ask the owner of the ES cluster to create the index 
   * @param indexName 
   * @param documentType 
   * @throws IOException 
   */ 
  private static void createIndex(String indexName, String documentType) throws IOException { 
    final IndicesExistsResponse iRes = client.admin().indices().prepareExists(indexName).execute().actionGet(); 
    if (iRes.isExists()) { 
      // the index already exists: drop it first (destructive -- all existing data is lost) 
      client.admin().indices().prepareDelete(indexName).execute().actionGet(); 
    } 
    // create the index with 1 shard and 0 replicas 
    client.admin().indices().prepareCreate(indexName).setSettings(Settings.settingsBuilder().put("number_of_shards", 1).put("number_of_replicas", "0")).execute().actionGet(); 
    XContentBuilder mapping = jsonBuilder() 
        .startObject() 
           .startObject(documentType) 
//           .startObject("_routing").field("path","tid").field("required", "true").endObject() 
           .startObject("_source").field("enabled", "true").endObject() 
           .startObject("_all").field("enabled", "false").endObject() 
             .startObject("properties") 
               .startObject("user") 
                 .field("store", true) 
                 .field("type", "string") 
                 .field("index", "not_analyzed") 
                .endObject() 
                .startObject("message") 
                 .field("store", true) 
                 .field("type","string") 
                 .field("index", "analyzed") 
                 .field("analyzer", "standard") 
                .endObject() 
                .startObject("price") 
                 .field("store", true) 
                 .field("type", "float") 
                .endObject() 
                .startObject("nv1") 
                 .field("store", true) 
                 .field("type", "integer") 
                 .field("index", "no") 
                 .field("null_value", 0) 
                .endObject() 
                .startObject("nv2") 
                 .field("store", true) 
                 .field("type", "integer") 
                 .field("index", "not_analyzed") 
                 .field("null_value", 10) 
                .endObject() 
                .startObject("tid") 
                 .field("store", true) 
                 .field("type", "string") 
                 .field("index", "not_analyzed") 
                .endObject() 
                .startObject("endTime") 
                 .field("type", "date") 
                 .field("store", true) 
                 .field("index", "not_analyzed") 
                 .field("format", "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd'T'HH:mm:ss.SSSZ") 
                .endObject() 
                .startObject("date") 
                 .field("type", "date") 
                .endObject() 
             .endObject() 
           .endObject() 
          .endObject(); 
    client.admin().indices() 
        .preparePutMapping(indexName) 
        .setType(documentType) 
        .setSource(mapping) 
        .execute().actionGet(); 
  } 
}
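
The `endTime` mapping above accepts values in `yyyy-MM-dd HH:mm:ss` (its first format pattern), so strings indexed into that field, like the one in `indexWithMap`, must parse under that pattern. A quick self-contained check using only the plain JDK, no ES dependency (the class and method names here are just for illustration):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class EndTimeFormatCheck {
    // The first pattern from the endTime mapping's "format" setting.
    static final String PATTERN = "yyyy-MM-dd HH:mm:ss";

    // Returns true if the value parses strictly under the mapping's pattern.
    static boolean matchesMappingFormat(String value) {
        SimpleDateFormat fmt = new SimpleDateFormat(PATTERN);
        fmt.setLenient(false); // reject impossible dates like "2015-13-40 09:00:00"
        try {
            Date parsed = fmt.parse(value);
            // round-trip to rule out partial parses with trailing garbage
            return value.equals(fmt.format(parsed));
        } catch (ParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // the value indexed in indexWithMap above:
        System.out.println(matchesMappingFormat("2015-08-25 09:00:00")); // true
        // a date-only string lacks the required time part:
        System.out.println(matchesMappingFormat("2013-01-30"));         // false
    }
}
```

If the value matches neither pattern in the mapping's `format` string, ES rejects the document at index time, so it pays to validate dates on the client side first.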
