
He who plants a tree,
Plants a hope.
       — "Plant a Tree", Lucy Larcom

Intro

In this post I'm going to show you a couple of options for managing hierarchical data represented as a tree data structure. This is the natural approach when you need to implement things like:

  • file system paths
  • organisational charts
  • discussion forum comments
  • a more contemporary topic: small2big retrieval for RAG applications

If you already know what a graph is, a tree is simply a connected graph without any cycles. You can represent one visually like this.

[Image: visualisation of the employee tree, with the node IDs used throughout this post]

There are multiple alternatives for storing trees in relational databases. In the sections below, I'll show you three of them:

  • adjacency list
  • materialized paths
  • nested sets

There will be two parts to this blog post. In this first one the alternatives are introduced and you see how to load and store data - the basics. Having that out of the way, in the second part, the focus is more on their comparison and trade-offs, for example I want to look at what happens at increased data volumes and what are the appropriate indexing strategies.

All the code you'll see in the sections below can be found here if you're interested to check it out.

The running use-case will be that of employees and their managers, and the IDs for each will be exactly the ones you saw in the tree visualisation I showed above.

Local environment

I'm using the recently released Postgres 17 with Testcontainers. This gives me a repeatable setup to work with. For example, we can use initialisation SQL scripts to automate the creation of a Postgres database with the necessary tables and populate it with some test data.

@TestConfiguration(proxyBeanMethods = false)
class TestcontainersConfiguration {

    private static final String POSTGRES = "postgres";

    @Bean
    @ServiceConnection
    PostgreSQLContainer<?> postgresContainer() {
        return new PostgreSQLContainer<>(DockerImageName.parse("postgres:17"))
                .withUsername(POSTGRES)
                .withPassword(POSTGRES)
                .withDatabaseName(POSTGRES)
                .withInitScript("init-script.sql");
    }
}

Let's jump in and have a look at the first approach.

1. The adjacency list model

This was the first widely used solution for managing hierarchical data, so it's still common in existing codebases and chances are you'll encounter it at some point. The idea is that we store the manager's ID - or, more generically, the parent's ID - in the same row as the employee. It will become clear once we look at the table structure.

Schema

The table corresponding to the adjacency list option looks like this:

create table employees
(
    id           bigserial primary key,
    manager_id   bigint references employees,
    name         text
);

In addition to the above, in order to ensure data integrity, we should also write check constraints that ensure at least the following:

  • there is a single parent for every node
  • no cycles
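As a plain-Java sketch of what those checks must guarantee - the validation below is illustrative, operating on an in-memory child-to-parent map rather than inside the database:

```java
import java.util.*;

public class TreeValidator {

    // Checks that the parent map (childId -> parentId, null for the root)
    // forms a valid tree: exactly one root and no cycles.
    static boolean isValidTree(Map<Long, Long> parentOf) {
        long roots = parentOf.values().stream().filter(Objects::isNull).count();
        if (roots != 1) return false;
        // Walk up from every node; a walk longer than the node count means a cycle.
        for (Long start : parentOf.keySet()) {
            Long current = start;
            int steps = 0;
            while (current != null) {
                current = parentOf.get(current);
                if (++steps > parentOf.size()) return false; // cycle detected
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<Long, Long> tree = new HashMap<>();
        tree.put(1L, null); // root
        tree.put(2L, 1L);
        tree.put(3L, 1L);
        System.out.println(isValidTree(tree)); // true

        Map<Long, Long> cyclic = new HashMap<>();
        cyclic.put(1L, null);
        cyclic.put(2L, 3L);
        cyclic.put(3L, 2L); // 2 and 3 point at each other
        System.out.println(isValidTree(cyclic)); // false
    }
}
```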

Generating test data

Especially for Part 2 of this series, we need a way to generate as much data as we want to populate the schema. Let's first do it step by step for clarity, then recursively.

Step by step

We start simple by inserting three levels of employees in the hierarchy explicitly.

You might know already about CTEs in Postgres - they are auxiliary named queries executed within the context of a main query. Below, you can see how I construct each level on the basis of the level before.

with root as (
  insert into 
    employees(manager_id, name)
      select 
        null, 
        'root' || md5(random()::text) 
      from  
        generate_series(1, 1) g
      returning 
        employees.id
  ),
  first_level as (
    insert into 
      employees(manager_id, name)
        select 
          root.id, 
          'first_level' || md5(random()::text) 
        from 
          generate_series(1, 2) g, 
          root
        returning 
          employees.id
  ),
  second_level as (
    insert into 
      employees(manager_id, name)
        select 
          first_level.id, 
          'second_level' || md5(random()::text) 
        from 
          generate_series(1, 2) g, 
          first_level
        returning 
          employees.id
  )
insert into 
  employees(manager_id, name)
select 
  second_level.id, 
  'third_level' || md5(random()::text) 
from 
  generate_series(1, 2) g, 
  second_level;

Let's verify that it works as expected so far by doing a count of the inserted rows. You can compare it with the number of nodes in the tree visualisation at the beginning of this post.

postgres=# select count(*) from employees;
 count 
-------
 15
(1 row)

Looks alright! The root plus three levels below it - in total we get 15 nodes.

Time to move on to the recursive approach.

Recursive

Writing recursive queries follows a standard procedure. We define a base step and a recursive step, then "connect" them to each other using union all. At runtime Postgres will follow this recipe and generate all our results. Have a look.

create temporary sequence employees_id_seq;

insert into employees (id, manager_id, name)
with recursive t(id, parent_id, level, name) as
(
  select 
    nextval('employees_id_seq')::bigint,
    null::bigint, 
    1, 
    'root' 
  from generate_series(1,1) g

  union all

  select 
    nextval('employees_id_seq')::bigint, 
    t.id, 
    level + 1, 
    'level' || level || '-' || md5(random()::text) 
  from 
    t, 
    generate_series(1,2) g
  where 
    level < 4  -- four generations in total: 1 + 2 + 4 + 8 = 15 rows
)
select id, parent_id, name from t;

After running it, let's do a count again to see if the same number of elements are inserted.

postgres=# select count(*) from employees;
 count 
-------
 15
(1 row)

Cool, we're in business! We can now populate the schema with however many levels and elements we want, and thus completely control the inserted volume. No worries if recursive queries still look a bit difficult - we'll revisit them shortly when writing the queries that retrieve the data.
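To demystify the recursion a little: Postgres evaluates a recursive CTE iteratively - it seeds a working set with the base step's rows, then reapplies the recursive step to the newest rows only, until nothing new is produced. A plain-Java sketch of that evaluation loop (the parent map below is illustrative):

```java
import java.util.*;

public class RecursiveCteDemo {

    // Mimics how Postgres evaluates "with recursive": start from the base
    // step's rows, then repeatedly apply the recursive step to the newest
    // rows only, until no new rows are produced.
    static List<Long> descendants(Map<Long, Long> parentOf, long managerId) {
        List<Long> result = new ArrayList<>();
        List<Long> frontier = new ArrayList<>(List.of(managerId)); // base step
        while (!frontier.isEmpty()) {
            List<Long> next = new ArrayList<>();
            for (Map.Entry<Long, Long> e : parentOf.entrySet()) {
                if (frontier.contains(e.getValue())) { // recursive step
                    next.add(e.getKey());
                }
            }
            result.addAll(next);
            frontier = next;
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Long, Long> parentOf = new HashMap<>(); // childId -> parentId
        parentOf.put(2L, 1L);
        parentOf.put(3L, 1L);
        parentOf.put(4L, 2L);
        parentOf.put(5L, 2L);
        System.out.println(descendants(parentOf, 1L)); // all subordinates of 1
    }
}
```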

For now, let's proceed to have a look at the Hibernate entity we can use to map our table to a Java class.

@Entity
@Table(name = "employees")
@Getter
@Setter
public class Employee {
    @Id
    private Long id;

    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "manager_id")
    private Employee manager;

    @OneToMany(
            mappedBy = "manager",
            cascade = CascadeType.ALL,
            orphanRemoval = true
    )
    private List<Employee> employees = new ArrayList<>();
}



Nothing special, just a one-to-many relationship between managers and employees. You saw this coming. Let's start querying.

Descendants

All subordinates of a manager

To retrieve all employees who are subordinates of a specific manager, referenced by her ID, we'll write another recursive query. You'll again see a base step and a recursive step linked to it; Postgres repeats the recursive step until all relevant rows have been retrieved. Let's take the employee with ID = 2 as an example. Below is a visual representation of the first few steps, which hopefully makes the process easier to follow.

[Image: step-by-step expansion of the recursive query for the subordinates of employee 2]

Here's the JPQL query for querying descendants:

return entityManager.createQuery("""
        with employeeRoot as (
          select
            employee.employees employee
          from
            Employee employee
          where
            employee.id = :employeeId

          union all

          select
            employee.employees employee
          from
            Employee employee
          join
            employeeRoot root on employee = root.employee
          order by
            employee.id
        )
        select
          new Employee(root.employee.id)
        from
          employeeRoot root
        """, Employee.class)
    .setParameter("employeeId", employeeId)
    .getResultList();

To keep queries like the one above cleaner and avoid writing the fully qualified name of the record we project into, we can use the hypersistence-utils library and write a ClassImportIntegratorProvider:

public class ClassImportIntegratorProvider implements IntegratorProvider {
    @Override
    public List<Integrator> getIntegrators() {
        return List.of(
                new ClassImportIntegrator(
                        singletonList(
                                Employee.class
                        )
                )
        );
    }
}

Reviewing the generated queries

It works, but let's take a deeper look at what Hibernate generated. It's always good to understand what's happening under the hood - otherwise we might ship inefficiencies that run on every user request, and that adds up.

We'll have to start the Spring Boot app with the following setting:

@DynamicPropertySource
static void registerPgProperties(DynamicPropertyRegistry registry) {
    registry.add("spring.jpa.show_sql", () -> true);
}

Alright, let's have a look. Here's the query for the descendants generated by Hibernate.

with recursive employeeRoot (employee_id) as 
(
select 
  e1_0.id
from 
  employees eal1_0
join 
  employees e1_0 on eal1_0.id = e1_0.manager_id
where eal1_0.id=?

union all

(
select 
  e2_0.id
from 
  employees eal2_0
join 
  employeeRoot root1_0 on eal2_0.id = root1_0.employee_id
join 
  employees e2_0 on eal2_0.id = e2_0.manager_id
order by 
  eal2_0.id
)
)
select 
  root2_0.employee_id
from 
  employeeRoot root2_0

Hmm - looks a bit more complicated than expected! Let's see if we can simplify it a bit, keeping in mind the picture I showed you earlier about the base step and the recursive step linked with the base step. We shouldn't need to do more than that. See what you think of the following.

with recursive employeeRoot (employee_id) as 
(
select 
  e1_0.id
from 
  employees e1_0
where e1_0.manager_id=?

union all

select 
  e2_0.id
from 
  employees e2_0
join 
  employeeRoot root1_0 on e2_0.manager_id = root1_0.employee_id
)
select 
  root2_0.employee_id
from 
  employeeRoot root2_0



Much better! We removed some unnecessary joins, which is expected to make the query faster because it has less work to do.

Final result

As a final step, let's clean up the query and replace the aliases Hibernate generates with more human-readable names.

-- alias names chosen for readability
with recursive subordinates (id) as 
(
  select 
    e.id
  from 
    employees e
  where 
    e.manager_id = ?

  union all

  select 
    e.id
  from 
    employees e
  join 
    subordinates s on e.manager_id = s.id
)
select 
  s.id
from 
  subordinates s

Alright, time to see how we go "up" the tree.

Ancestors

All managers up the chain

Let's first try to write down the conceptual steps for getting the managers of employee with ID = 14.

[Image: step-by-step expansion of the recursive query for the managers of employee 14]

It looks very much like the one for the descendants; only the connection between the base step and the recursive step goes the other way.
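Conceptually, going "up" is just following the manager pointer repeatedly until we reach the root. A plain-Java sketch - the IDs below assume node 14's chain of managers is 7, then 3, then the root 1:

```java
import java.util.*;

public class AncestorsDemo {

    // Walking "up" the tree: follow the manager pointer from the starting
    // employee until we reach the root (no manager).
    static List<Long> ancestors(Map<Long, Long> managerOf, long employeeId) {
        List<Long> chain = new ArrayList<>();
        Long current = managerOf.get(employeeId); // base step
        while (current != null) {
            chain.add(current);                   // recursive step
            current = managerOf.get(current);
        }
        return chain;
    }

    public static void main(String[] args) {
        Map<Long, Long> managerOf = new HashMap<>(); // employeeId -> managerId
        managerOf.put(14L, 7L);
        managerOf.put(7L, 3L);
        managerOf.put(3L, 1L);
        System.out.println(ancestors(managerOf, 14L)); // [7, 3, 1]
    }
}
```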

The JPQL query looks like this (a sketch mirroring the descendants query, with the join reversed):

return entityManager.createQuery("""
        with employeeRoot as (
          select
            employee.manager employee
          from
            Employee employee
          where
            employee.id = :employeeId

          union all

          select
            employee.manager employee
          from
            Employee employee
          join
            employeeRoot root on employee = root.employee
        )
        select
          new Employee(root.employee.id)
        from
          employeeRoot root
        """, Employee.class)
    .setParameter("employeeId", employeeId)
    .getResultList();

And that's it! I looked at the generated SQL query but could not find anything extra to shave off. Time to move on to approach 2.

2. Materialized paths

ltree is a Postgres extension we can use to work with hierarchical tree structures as materialized paths, recorded from the top of the tree. For example, this is how we record the path for leaf node 8: 1.2.4.8. The extension comes with several useful functions. We can use it as a table column:

-- table and column names are illustrative
create extension if not exists ltree;

create table employees_ltree
(
    id     bigserial primary key,
    name   text,
    path   ltree
);

To populate the above table with test data, the approach I took was basically to migrate the generated data from the adjacency-list table you saw before, using the following SQL command. It's again a recursive query, which appends each node's ID to its parent's path at every step.

-- a sketch, assuming the ltree table shown above
insert into employees_ltree (id, name, path)
with recursive t(id, name, path) as
(
  select 
    e.id, 
    e.name, 
    e.id::text::ltree
  from employees e
  where e.manager_id is null

  union all

  select 
    e.id, 
    e.name, 
    t.path || e.id::text
  from employees e
  join t on e.manager_id = t.id
)
select id, name, path from t;

Here are a few of the entries that the above command generated (abbreviated):

postgres=# select id, path from employees_ltree order by path limit 4;
 id |  path
----+---------
  1 | 1
  2 | 1.2
  4 | 1.2.4
  8 | 1.2.4.8
(4 rows)

We can proceed to write the Hibernate entity. In order to map columns of type ltree, I implemented a UserType. I can then map the path field with @Type(LTreeType.class):

// a sketch - entity and column names are illustrative
@Entity
@Table(name = "employees_ltree")
@Getter
@Setter
public class EmployeeLtree {
    @Id
    private Long id;

    private String name;

    @Type(LTreeType.class)
    @Column(columnDefinition = "ltree")
    private String path;
}

We're ready to write some queries. In native SQL, it would look like the following:

-- all descendants of the node with path '1.2'
-- (<@ means "is a descendant of, or equal to")
select *
from employees_ltree
where path <@ '1.2';

But let's write our queries in JPQL. For this, we first have to write a custom StandardSQLFunction, which lets us map a JPQL function name onto the Postgres native operator.

// a sketch - maps the JPQL function name onto ltree_isparent(),
// the function form of Postgres's @> operator (names are illustrative)
public class IsParentFunction extends StandardSQLFunction {
    public IsParentFunction() {
        super("ltree_isparent", StandardBasicTypes.BOOLEAN);
    }
}

We then have to register it as a FunctionContributor, like so:

// a sketch - registers the function with Hibernate (names are illustrative)
public class LtreeFunctionContributor implements FunctionContributor {
    @Override
    public void contributeFunctions(FunctionContributions functionContributions) {
        functionContributions.getFunctionRegistry()
                .register("isParent", new IsParentFunction());
    }
}


The last step is to create a resource file in the META-INF/services folder called org.hibernate.boot.model.FunctionContributor, containing a single line with the fully qualified name of the class above.

Okay, cool! We're finally in a position to write the following query:

// a sketch - retrieves all rows whose path falls under the given
// ancestor path (method and parameter names are illustrative)
return entityManager.createQuery("""
        select e
        from EmployeeLtree e
        where isParent(:path, e.path) = true
        """, EmployeeLtree.class)
    .setParameter("path", path)
    .getResultList();

For example, we can call this method like this to retrieve all the paths that contain ID = 2:

// illustrative usage - node 2 together with its whole subtree
List<EmployeeLtree> subtree = findSubtree("1.2");

Postgres offers a wide set of functions for working with ltrees - you can find them on the official docs page, and there's a useful cheatsheet as well.
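To build intuition for why materialized paths are convenient: a node's ancestors are just the prefixes of its path, and descendant checks are just prefix comparisons - which is essentially what ltree's operators do for us. A plain-Java sketch:

```java
import java.util.*;

public class MaterializedPathDemo {

    // The ancestors of a node are simply the proper prefixes of its path.
    static List<String> ancestorPaths(String path) {
        List<String> prefixes = new ArrayList<>();
        String[] labels = path.split("\\.");
        StringBuilder prefix = new StringBuilder();
        for (int i = 0; i < labels.length - 1; i++) {
            if (i > 0) prefix.append('.');
            prefix.append(labels[i]);
            prefixes.add(prefix.toString());
        }
        return prefixes;
    }

    // A node is a descendant of another when its path starts with the
    // other's path (roughly what Postgres's <@ operator checks).
    static boolean isDescendantOf(String path, String ancestor) {
        return path.equals(ancestor) || path.startsWith(ancestor + ".");
    }

    public static void main(String[] args) {
        System.out.println(ancestorPaths("1.2.4.8"));         // [1, 1.2, 1.2.4]
        System.out.println(isDescendantOf("1.2.4.8", "1.2")); // true
    }
}
```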

It's important to add constraints to our schema in order to ensure data consistency - here's a good resource I found on this topic.

3. Nested sets

This model is easiest to understand with an image showing the intuition. Every node of the tree gets an extra "left" and "right" column besides its ID. The rule is that all children have their left and right values between their parent's left and right values.

[Image: the employee tree annotated with left and right values for each node]
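The payoff of this rule is that ancestor and descendant checks become simple interval comparisons, with no recursion at all. A plain-Java sketch, with illustrative left/right values:

```java
public class NestedSetsDemo {

    record Node(long id, int lft, int rgt) {}

    // In the nested set model, "a is an ancestor of b" is a simple
    // interval containment check -- no recursion needed.
    static boolean isAncestor(Node a, Node b) {
        return a.lft() < b.lft() && b.rgt() < a.rgt();
    }

    public static void main(String[] args) {
        // Illustrative values for a small tree:
        // root (1,10) contains child (2,5), which contains leaf (3,4).
        Node root = new Node(1L, 1, 10);
        Node child = new Node(2L, 2, 5);
        Node leaf = new Node(3L, 3, 4);
        System.out.println(isAncestor(root, leaf));  // true
        System.out.println(isAncestor(child, root)); // false
    }
}
```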

Here's the table structure to represent the tree above.

-- table and column names are illustrative
create table employees_nested
(
    id     bigserial primary key,
    name   text,
    lft    bigint not null,
    rgt    bigint not null
);

To populate the table, I converted the script from Joe Celko's "SQL for Smarties" book into Postgres syntax. It's a bit long to reproduce here, but you can find it in the repository linked at the beginning of this post.

Alright, I'm ready to run some queries. Here's how to retrieve the ancestors of a given employee.

-- ancestors of a node: every row whose (lft, rgt) interval
-- contains the node's own interval (names are illustrative)
select p.*
from employees_nested e
join employees_nested p on e.lft between p.lft and p.rgt
                       and p.id <> e.id
where e.id = ?;

For the descendants, we first have to retrieve the node's left and right values, after which we can use the query below.

-- descendants of a node: every row whose left and right fall
-- strictly inside the node's (lft, rgt) interval
select *
from employees_nested
where lft > ? and rgt < ?;

And that's it! You've seen how to go up and down the tree with all three approaches. I hope you enjoyed the journey and find it useful.

Postgres vs. document/graph databases

The database we've used for the examples above is PostgreSQL. It's not the only option - you might wonder why not choose a document database like MongoDB, or a graph database like Neo4j, since they were actually built with this type of workload in mind.

Chances are, you already have your source-of-truth data in Postgres, in a relational model leveraging transactional guarantees. In that case, you should first check how well Postgres itself handles your auxiliary use-cases, in order to keep everything in one place. This way you avoid the cost and operational complexity of spinning up and maintaining a separate specialised data store, as well as the effort of getting familiar with it.

Conclusion

There are several interesting options for modelling hierarchical data in your database applications. In this post I've shown you three of them. Stay tuned for Part 2, where we'll compare them and see what happens with larger volumes of data.

References

https://dev.to/yugabyte/learn-how-to-write-sql-recursive-cte-in-5-steps-3n88
https://vladmihalcea.com/hibernate-with-recursive-query/
https://vladmihalcea.com/dto-projection-jpa-query/
https://tudborg.com/posts/2022-02-04-postgres-hierarchical-data-with-ltree/
https://aregall.tech/hibernate-6-custom-functions#heading-implementing-a-custom-function
https://www.amazon.co.uk/Joe-Celkos-SQL-Smarties-Programming/dp/0128007613
https://madecurious.com/curiosities/trees-in-postgresql/
https://schinckel.net/2014/11/27/postgres-tree-shootout-part-2:-adjacency-list-using-ctes/
