
How to use Java tool classes to write reports efficiently

Why use Java code to write reports

For report data, writing SQL is in most cases enough to provide the data source for a dashboard or report. But in some complex cases SQL alone cannot do it, or is hard to get right, so the complex logic is implemented in code and the final result is returned from there.

Problems encountered

Relatively complex reports often require joining data sets (table-to-table joins), grouping, calculations and similar operations. SQL supports these natively and makes them easy to express. But when we need to join data in Java code, the native support is not nearly as friendly, and we usually end up implementing it by hand, as below.

Suppose we have two collections:

List<ContractDetail> contractDetails; // contract detail records; a contract may appear more than once
List<ContractInfo> contractInfos;     // contract master records; each contract appears only once

The corresponding data structures:

public class ContractDetail {
    /**
     * contract number
     */
    private String contractNo;
    /**
     * total amount
     */
    private BigDecimal moneyTotal;
    // getters and setters omitted
}
public class ContractInfo {
    /**
     * contract number
     */
    private String contractNo;
    /**
     * status
     */
    private String status;
    // getters and setters omitted
}

Requirements

Join contractDetails to contractInfos on contractNo and keep only the rows whose status is '已签订' (signed).

Then group the filtered contractDetails by contractNo and compute, for each contractNo, the sum of its moneyTotal values.

The final output should be a map

Map<String /* contract number */, BigDecimal /* sum of the corresponding moneyTotal */> result;

We would usually implement it like this:

// step 1: collect the contract numbers whose status is "已签订" (signed)
Set<String> signedContracts = contractInfos.stream()
        .filter(it -> "已签订".equals(it.getStatus()))
        .map(ContractInfo::getContractNo)
        .collect(Collectors.toSet());
// step 2: keep only the contractDetails whose contract number passed step 1
contractDetails = contractDetails.stream()
        .filter(it -> signedContracts.contains(it.getContractNo()))
        .collect(Collectors.toList());
// step 3: accumulate moneyTotal per contractNo
Map<String, BigDecimal> result = new HashMap<>();
contractDetails.forEach(it -> {
    BigDecimal moneyTotal = Optional.ofNullable(result.get(it.getContractNo()))
            .orElse(BigDecimal.ZERO);
    moneyTotal = moneyTotal.add(it.getMoneyTotal() != null ? it.getMoneyTotal() : BigDecimal.ZERO);
    result.put(it.getContractNo(), moneyTotal);
});

Clearly this implementation is more complicated than it needs to be: in SQL the same requirement is nothing more than a join followed by a GROUP BY and a SUM. Now take a look at the following tool classes and consider whether there is a simpler way to implement it.

Tool classes

CollectionDataStream

The collection data stream CollectionDataStream associates collections through a fluent interface and implements two operations similar to SQL's join and left join.

It can also convert to and from a native Java Stream.

Each collection is turned into an aggregated, table-like data structure that carries a table name (alias) together with the data:

public class AggregationData {
    Map<String, Map> aggregationMap;
    private AggregationData(){
        aggregationMap = new HashMap<>();
    }
    // key: the alias (table name); value: the corresponding object converted to a Map
    public AggregationData(String tableName, Object data) {
        aggregationMap = new HashMap<>();
        aggregationMap.put(tableName, BeanUtil.beanToMap(data));
    }
    public Map<String, Map> getRowAllData() {
        return aggregationMap;
    }
    public Map getTableData(String tableName) {
        if (!aggregationMap.containsKey(tableName)) {
            throw new DataStreamException(tableName + ".not.exists");
        }
        return aggregationMap.get(tableName);
    }
    public void setTableData(String tableName, Object data) {
        if(aggregationMap.containsKey(tableName)){
            throw new DataStreamException(tableName+".has.been.exists!");
        }
        aggregationMap.put(tableName, BeanUtil.beanToMap(data));
    }
    private void setTableData(String tableName, Map<String, Object> data) {
        Map<String, Object> tableData =
                Optional.ofNullable(aggregationMap.get(tableName)).orElse(new HashMap<String, Object>());
        tableData.putAll(data);
        aggregationMap.put(tableName, tableData);
    }
    public AggregationData copyAggregationData() {
        AggregationData aggregationData = new AggregationData();
        for (String tableName : this.getRowAllData().keySet()) {
            aggregationData.setTableData(tableName, this.getRowAllData().get(tableName));
        }
        return aggregationData;
    }
}

AggregationData represents one row of data: the keys of aggregationMap are table names (aliases) and the values are the corresponding data.
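
A minimal sketch of how one bean becomes one row (it assumes the POJOs above have the usual getters and setters, and the sample values are made up for illustration):

ContractDetail detail = new ContractDetail();
detail.setContractNo("C001");
detail.setMoneyTotal(new BigDecimal("100"));

// one bean becomes one row, keyed by the alias "t1"
AggregationData row = new AggregationData("t1", detail);
Map t1Data = row.getTableData("t1");   // {contractNo=C001, moneyTotal=100}

ContractInfo info = new ContractInfo();
info.setContractNo("C001");
info.setStatus("已签订");
row.setTableData("t2", info);          // attach a second table's data to the same row
// row.getTableData("t3") would throw a DataStreamException ("t3.not.exists")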

Let’s take a detailed look at this interface

import java.util.Collection;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Stream;
public interface CollectionDataStream<T> {
    /**
     * Converts a collection into a data stream and gives it an alias (table name)
     * @param tableName
     * @param collection
     * @return
     */
    static CollectionDataStream<AggregationData> of(String tableName, Collection<?> collection) {
        return new CollectionDataStreamImpl(tableName, collection);
    }
    /**
     * Converts a Stream into a data stream and gives it an alias (table name)
     * @param tableName
     * @param collection
     * @return
     */
    static CollectionDataStream<AggregationData> of(String tableName, Stream<?> collection) {
        return new CollectionDataStreamImpl(tableName, collection);
    }
    /**
     * Inner join with a custom join condition, evaluated with a double loop
     *
     * @param tableName
     * @param collection
     * @param predict
     * @param <T1>
     * @return
     */
    <T1> CollectionDataStream<T> join(String tableName, Collection<T1> collection, JoinPredicate<T, T1> predict);
    /**
     * Equal-value inner join, optimized with a map
     *
     * @param tableName
     * @param collection
     * @param aggregationMapper
     * @param dataValueMapper
     * @param <T1>
     * @param <R>
     * @return
     */
    // recommended for equal-value join conditions
    <T1, R> CollectionDataStream<T> joinUseHashOnEqualCondition(String tableName, Collection<T1> collection, Function<T, R> aggregationMapper, Function<T1, R> dataValueMapper);
    /**
     * Left join with a custom join condition, evaluated with a double loop
     *
     * @param tableName
     * @param collection
     * @param predict
     * @param <T1>
     * @return
     */
    <T1> CollectionDataStream<T> leftJoin(String tableName, Collection<T1> collection, JoinPredicate<T, T1> predict);
    /**
     * Equal-value left join, optimized with a map
     *
     * @param tableName
     * @param collection
     * @param aggregationMapper
     * @param dataValueMapper
     * @param <T1>
     * @param <R>
     * @return
     */
    <T1, R> CollectionDataStream<T> leftJoinUseHashOnEqualCondition(String tableName, Collection<T1> collection, Function<T, R> aggregationMapper, Function<T1, R> dataValueMapper);
    Stream<T> toStream();
    Stream<Map> toStream(String tableName);
    <R> Stream<R> toStream(String tableName, Class<R> clzz);
    <R> Stream<R> toStream(Function<AggregationData, R> mapper);
}

Pay attention to the difference between the joinUseHashOnEqualCondition and join methods.

If the collections are joined on the equality of a single field, use joinUseHashOnEqualCondition: internally it groups one side into a map keyed by the join value. join accepts an arbitrary join condition, but evaluates it with a double loop over both collections, which is slower. So for equal-value joins, joinUseHashOnEqualCondition is the more efficient choice, as the sketch below illustrates.
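
The difference is essentially a nested-loop join versus a hash join. Here is a simplified sketch of the two strategies; it is not the tool class's actual implementation, and the class and method names are illustrative only:

import java.util.*;
import java.util.function.BiPredicate;
import java.util.function.Function;
import java.util.stream.Collectors;

// Simplified illustration of the two join strategies.
class JoinSketch {

    // Custom condition: every pair of rows is tested, O(n * m) comparisons.
    static <A, B> List<Map.Entry<A, B>> nestedLoopJoin(List<A> left, List<B> right,
                                                       BiPredicate<A, B> condition) {
        List<Map.Entry<A, B>> out = new ArrayList<>();
        for (A a : left) {
            for (B b : right) {
                if (condition.test(a, b)) {
                    out.add(new AbstractMap.SimpleEntry<>(a, b));
                }
            }
        }
        return out;
    }

    // Equal-value condition: index the right side by its join key once, then probe it, roughly O(n + m).
    static <A, B, K> List<Map.Entry<A, B>> hashJoin(List<A> left, List<B> right,
                                                    Function<A, K> leftKey, Function<B, K> rightKey) {
        Map<K, List<B>> index = right.stream().collect(Collectors.groupingBy(rightKey));
        List<Map.Entry<A, B>> out = new ArrayList<>();
        for (A a : left) {
            for (B b : index.getOrDefault(leftKey.apply(a), Collections.emptyList())) {
                out.add(new AbstractMap.SimpleEntry<>(a, b));
            }
        }
        return out;
    }
}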

How to use

Take the requirement above as an example.

First, join the two collections:

CollectionDataStream.of("t1", contractDetails)
        .joinUseHashOnEqualCondition(
                "t2",
                contractInfos.stream().filter(it -> "已签订".equals(it.getStatus())).collect(Collectors.toList()),
                agg -> agg.getTableData("t1").get("contractNo"),
                ContractInfo::getContractNo
        );

Code analysis

CollectionDataStream.of("t1", contractDetails)

converts the collection contractDetails into a data stream with the alias (table name) t1.

.joinUseHashOnEqualCondition(
        "t2",
        contractInfos.stream().filter(it -> "已签订".equals(it.getStatus())).collect(Collectors.toList()),
        agg -> agg.getTableData("t1").get("contractNo"),
        ContractInfo::getContractNo
);

joins in the pre-filtered contractInfos under the alias t2. The join condition is the equality of contractNo in t1 with contractNo of contractInfos, and the result is a new aggregated data stream.
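
After the join, every element of the stream is an AggregationData row that carries both aliases, and both sides can be read back from it. A sketch (signedInfos is just a name introduced here for the pre-filtered list of signed contracts):

List<ContractInfo> signedInfos = contractInfos.stream()
        .filter(it -> "已签订".equals(it.getStatus()))
        .collect(Collectors.toList());

CollectionDataStream.of("t1", contractDetails)
        .joinUseHashOnEqualCondition("t2", signedInfos,
                agg -> agg.getTableData("t1").get("contractNo"),
                ContractInfo::getContractNo)
        .toStream()                          // Stream<AggregationData>
        .forEach(row -> {
            Map t1 = row.getTableData("t1"); // contractNo, moneyTotal from the detail
            Map t2 = row.getTableData("t2"); // contractNo, status from the info
        });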

Of course, the same thing can also be done with a custom join condition:

CollectionDataStream.of("t1", contractDetails)
                .join("t2",
                        contractInfos.stream().filter(it -> "已签订".equals(it.getStatus())).collect(Collectors.toList()),
                        (agg, data) -> agg.getTableData("t1").get("contractNo").equals(data.getContractNo())
                )

Because it is an inner join, it also acts as a filter. After the join we still need to group and aggregate, which is where the next tool class comes in.

MyCollectors

It is an extension of the native Collectors of the Stream API and implements some grouping operations that are commonly needed for reports:

package collector;
import utils.NumberUtil;
import java.math.BigDecimal;
import java.util.Comparator;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collector;
import java.util.stream.Collectors;
public class MyCollectors {
    /**
     * Returns a Collector that groups the elements and, when a group has more than one element,
     * keeps only the last one and ignores the rest; the value may be null.
     * Only suitable when the grouping key is known to be unique.
     * Use with caution: if a group has multiple elements, data will be lost!!!
     * @param keyMapper
     * @param <T>
     * @param <K>
     * @param <U>
     * @param <M>
     * @return
     */
    public static <T, K, U, M extends Map<K, U>>
    Collector<T, ?, Map<K, U>> groupingByLast(Function<? super T, ? extends K> keyMapper,
                                               Function<? super T, ? extends U> valueMapper) {
        return Collectors.groupingBy(keyMapper, Collectors.reducing(null, valueMapper, (o1, o2) -> o2));
    }
    /**
     * Takes a keyMapper and a comparator.
     * Groups by key and compares the elements within each group, keeping the maximum.
     * @param keyMapper
     * @param comparator
     * @param <T>
     * @param <K>
     * @param <U>
     * @param <M>
     * @return
     */
    public static <T, K, U, M extends Map<K, U>>
    Collector<T, ?, Map<K, T>> groupingByMaxComparator(Function<? super T, ? extends K> keyMapper,
                                                      Comparator<T> comparator) {
        return Collectors.groupingBy(keyMapper, Collectors.collectingAndThen(Collectors.maxBy(comparator), it -> it.orElse(null)));
    }
    /**
     * Takes a keyMapper and a comparator.
     * Groups by key and compares the elements within each group, keeping the minimum.
     * @param keyMapper
     * @param comparator
     * @param <T>
     * @param <K>
     * @param <U>
     * @param <M>
     * @return
     */
    public static <T, K, U, M extends Map<K, U>>
    Collector<T, ?, Map<K, T>> groupingByMinComparator(Function<? super T, ? extends K> keyMapper,
                                                       Comparator<T> comparator) {
        return Collectors.groupingBy(keyMapper, Collectors.collectingAndThen(Collectors.minBy(comparator), it -> it.orElse(null)));
    }
    /**
     * Groups by key and sums the given field within each group
     * @param keyMapper
     * @param <T>
     * @param <K>
     * @return
     */
    public static <T, K>
    Collector<T, ?, Map<K, BigDecimal>> groupingAndSum(Function<? super T, ? extends K> keyMapper,
                                                       Function<? super T, BigDecimal> valueMapper) {
        return Collectors.groupingBy(keyMapper, Collectors.reducing(BigDecimal.ZERO, valueMapper, NumberUtil::addNumbers));
    }
    /**
     * Sums a given field of the objects
     * @param mapper
     * @param <T>
     * @return
     */
    public static <T>
    Collector<T, ?, BigDecimal> sumByField(Function<? super T, ? extends BigDecimal> mapper) {
        return Collectors.reducing(BigDecimal.ZERO, mapper, NumberUtil::addNumbers);
    }
    /**
     * Sum
     */
    public static Collector<BigDecimal, ?, BigDecimal> sum() {
        return Collectors.reducing(BigDecimal.ZERO, NumberUtil::addNumbers);
    }
}
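
For example, step 3 of the hand-written version at the top collapses into a single collect call (a minimal sketch, assuming the NumberUtil.addNumbers helper adds two BigDecimal values null-safely):

// group by contract number and sum moneyTotal in one step
Map<String, BigDecimal> sums = contractDetails.stream()
        .collect(MyCollectors.groupingAndSum(
                ContractDetail::getContractNo,
                ContractDetail::getMoneyTotal));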

Putting the two tool classes together:

Map<String, BigDecimal> result = CollectionDataStream.of("t1", contractDetails)
        .joinUseHashOnEqualCondition(
                "t2",
                contractInfos.stream().filter(it -> "已签订".equals(it.getStatus())).collect(Collectors.toList()),
                agg -> agg.getTableData("t1").get("contractNo"),
                ContractInfo::getContractNo
        )
        .toStream("t1", ContractDetail.class) // convert the data stream back to a native Java Stream
        .collect(MyCollectors.groupingAndSum(ContractDetail::getContractNo, ContractDetail::getMoneyTotal));

This implementation is clearly simpler: it reduces the chance of errors, cuts down the amount of code, and improves efficiency.

Advantages

  • It implements join operations between collections as a fluent, streaming API, so multiple collections can be joined one after another in a single chain.

  • It converts to and from Stream, so all of Stream's capabilities (filtering, mapping, grouping and so on) remain available for complex operations.

  • Efficiency is reasonably well taken care of: equal-value joins are optimized with a map, inner joins try to probe the smaller collection against the larger one to reduce the number of iterations, and cglib's BeanMap is used when converting beans into row aggregate data to reduce memory usage and conversion overhead (see the sketch below).
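
On that last point, the idea behind cglib's BeanMap is to expose a bean as a Map view instead of copying every field into a new HashMap. A minimal illustration of the idea, not the tool class's actual code:

// assumes net.sf.cglib.beans.BeanMap (cglib) is on the classpath
ContractDetail detail = new ContractDetail();
BeanMap view = BeanMap.create(detail); // wraps the bean: reads go through its getters, no field-by-field copy
Object contractNo = view.get("contractNo");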
