22

We are supporting several microservices written in Java using Spring Boot and deployed in OpenShift. Some microservices communicate with databases. We often run a single microservice in multiple pods within a single deployment. When each microservice starts, it runs Liquibase, which tries to update the database. The problem is that sometimes one pod fails while waiting for the changelog lock. When this happens in our production OpenShift cluster, we expect other pods to fail on restart because of the same changelog lock issue. So, in the worst case, all pods end up waiting for the lock to be released.

We want Liquibase to automatically prepare our database schemas when each pod is starting.

Is it good to store this logic in every microservice? How can we automatically resolve the problem when the Liquibase changelog lock issue appears? Do we need to put the database preparation logic in a separate deployment?

So maybe I should rephrase my question: what is the best way to run DB migrations in a microservice architecture? Maybe we should not run the migration in each pod? Maybe it is better to do it in a separate deployment, or with an extra Jenkins job outside OpenShift altogether?

6
  • How do you know that Liquibase has successfully updated the db? Apr 23, 2020 at 12:55
  • @ChrisBolton After launching the pod with the new version of the application, where we changed something in the database structure via Liquibase scripts, we just look into Postgres and compare what is written in the Liquibase script with what we have in Postgres. This is a fully manual procedure.
    – Alex Crazy
    Apr 23, 2020 at 13:08
  • Automate the process. Apr 23, 2020 at 13:17
  • @ChrisBolton, we really don't know how to
    – Alex Crazy
    Apr 23, 2020 at 13:40
  • Hi, have you found a solution yet? We kind of run into the same problem and it is really annoying...
    – andi17
    Oct 29, 2020 at 11:12

5 Answers

21

We're running Liquibase migrations as an init container in Kubernetes. The problem with running Liquibase inside the microservices themselves is that Kubernetes will terminate the pod if the readiness probe does not succeed before the configured timeout. In our case this sometimes happened during large DB migrations, which could take a few minutes to complete. Kubernetes would terminate the pod, leaving DATABASECHANGELOGLOCK in a locked state. With init containers you will not have this problem. See https://www.liquibase.org/blog/using-liquibase-in-kubernetes for a detailed explanation.

UPDATE: Please take a look at this Liquibase extension, which replaces the StandardLockService by using database locks: https://github.com/blagerweij/liquibase-sessionlock

This extension uses MySQL or Postgres user lock statements, which are automatically released when the database connection is closed (e.g. when the container is stopped unexpectedly). The only thing required to use the extension is to add a dependency on the library; Liquibase will automatically detect the improved LockService.

I'm not the author of the library, but I stumbled upon it when I was searching for a solution, and I helped the author by releasing it to Maven Central. It currently supports MySQL and PostgreSQL, but it should be fairly easy to add support for other RDBMSs.

5
  • This looks really useful! Any chance of this getting incorporated by Liquibase itself?
    – Dirk Luijk
    Jan 5, 2022 at 10:59
  • @DirkLuijk great question. I've just added the extension on the Liquibase available extensions list: liquibase.jira.com/wiki/spaces/CONTRIB/pages/1998865/…
    – blagerweij
    Jan 5, 2022 at 12:48
  • Also supports Oracle and MariaDB
    – loic
    Feb 3, 2022 at 11:18
  • @blagerweij in the repo's docs you state that the package com.github.blagerweij.sessionlock must be added to Liquibase classpath scanner's whitelist. How can I do this programmatically with Spring Boot and the revised Liquibase 4+ concept of Scope where they have refactored the ServiceLocator in a way that it no longer has the #addPackageToScan() method? Mar 23, 2022 at 18:18
  • 1
    @DanieleRepici Since the jar file contains a services file in META-INF, that should work out of the box, without any additional configuration. Just add a dependency to your gradle or maven build file. Please let me know if that works.
    – blagerweij
    Mar 24, 2022 at 21:03
11

When Liquibase kicks in during the spring-boot app deployment, it performs (on a very high level) the following steps:

  1. Lock the database (create a record in databasechangeloglock).
  2. Execute the changelogs.
  3. Remove the database lock.

So if you interrupt the application deployment while Liquibase is between steps 1 and 3, your database will remain locked. When you try to redeploy your app, Liquibase will fail, because it will treat your database as locked.

So you have to unlock the database before deploying the app again.

There are two options that I'm aware of:

  1. Clear the databasechangeloglock table or set locked to false, i.e. DELETE FROM databasechangeloglock or UPDATE databasechangeloglock SET locked=0.
  2. Execute the liquibase releaseLocks command (see the Liquibase documentation). A programmatic sketch of this option follows below.
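
If you want to automate option 2 from code (as discussed in the comments below), here is a minimal sketch using the Liquibase Java API: Liquibase#forceReleaseLocks() is the programmatic counterpart of the releaseLocks command. The connection details and changelog path below are placeholders, and force-releasing the lock on every pod start can defeat the purpose of the lock if another instance is legitimately migrating at that moment, so treat it as an illustration rather than a drop-in fix.

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class ReleaseChangelogLock {

    public static void main(String[] args) throws Exception {
        // Placeholder JDBC settings: adjust to the target database.
        try (Connection jdbc = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {

            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(jdbc));

            // The changelog path is only needed to construct the Liquibase facade;
            // forceReleaseLocks() clears the DATABASECHANGELOGLOCK entry regardless.
            Liquibase liquibase = new Liquibase(
                    "db/changelog/db.changelog-master.xml",
                    new ClassLoaderResourceAccessor(),
                    database);

            liquibase.forceReleaseLocks();
        }
    }
}
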
8
  • Yeah, thanks, you have described the problem correctly. But the question is: can I release the lock in the logic of my application, so that each pod tries to release this lock when it starts? I think this will cause a concurrency problem. Am I right? How and where can I release this lock automatically, and is it possible to do it in my Spring Boot application code? Please pay attention to the fact that the application is running in many pods in one deployment in OpenShift, so there are several instances of my app running at the same time.
    – Alex Crazy
    Apr 23, 2020 at 13:45
  • We do it manually now, by hand, but how and where do we write code for this? We want this problem to be solved automatically.
    – Alex Crazy
    Apr 23, 2020 at 13:50
  • Can you execute the liquibase releaseLocks command in your deployment pipeline? You're right, Liquibase locks the database to solve the problem of concurrency. Perhaps you can override some Liquibase class and add "unlocking logic" to it, so it executes before Liquibase checks for the lock, or something like that.
    – htshame
    Apr 23, 2020 at 14:20
  • So maybe I should rephrase my question: what is the best way to run DB migrations in a microservice architecture? Maybe we should not run the migration in each pod? Maybe it is better to do it in a separate deployment, or with an extra Jenkins job outside OpenShift altogether?
    – Alex Crazy
    Apr 23, 2020 at 17:28
  • Oh, I see. In my experience, if multiple microservices connect to the same database, then it's reasonable to create a deployment order, so microservices deploy one by one. Also, a microservice architecture at its finest should have each microservice connect to its own database. Otherwise it looks more like a distributed monolith.
    – htshame
    Apr 23, 2020 at 18:05
4

We managed to solve this in my company by following the same init-container approach that Liquibase suggests, but instead of using a new container and running the migration via the Liquibase CLI, we reuse the existing Spring Boot service setup and just execute the Liquibase logic. We created an alternative main class that can be used in an entrypoint to populate the database using Liquibase.

The InitContainerApplication class brings the minimal configuration required to start the application and set up Liquibase.

Typical usage:

entrypoint: "java -cp /app/extras/*:/app/WEB-INF/classes:/app/WEB-INF/lib/* com.backbase.buildingblocks.auxiliaryconfig.InitContainerApplication"

Here is the class:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.SpringBootConfiguration;
import org.springframework.boot.autoconfigure.ImportAutoConfiguration;
import org.springframework.context.ApplicationContext;

@SpringBootConfiguration
@ImportAutoConfiguration(InitContainerAutoConfigurationSelector.class) // imports only the auto-configuration needed to start the context and run Liquibase
public class InitContainerApplication implements ApplicationRunner {

    @Autowired
    private ApplicationContext appContext;

    public static void main(String[] args) {
        SpringApplication.run(InitContainerApplication.class, args);
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // Liquibase has already run while the context was starting, so the init container can exit with code 0.
        SpringApplication.exit(appContext, () -> 0);
    }

}

Here is how it is used as an init container:

spec:
  initContainers:
    - name: init-liquibase
      command: ['java']
      args: ['-cp', '/app/extras/*:/app/WEB-INF/classes:/app/WEB-INF/lib/*',
                 'com.backbase.buildingblocks.auxiliaryconfig.InitContainerApplication']
2
  • How does this prevent the database from being left in a locked state? Init container or not, if the pod gets killed (when a new deployment is applied, for example) before Liquibase releases the lock, who is going to unlock it? Am I missing something about init containers?
    – Alexis
    Feb 25, 2022 at 8:32
  • This is just another take on the Liquibase proposal to avoid long-running migrations being killed by k8s readiness/liveness probes (liquibase.org/blog/using-liquibase-in-kubernetes has a detailed explanation), but instead of using a new container with the Liquibase CLI we reuse the same container as our service, just with a different entrypoint. If you kill the pod during a migration you still get the locked status and need to resolve it manually, unless you change the LockService, as in the example shown in previous answers: github.com/blagerweij/liquibase-sessionlock
    – Torres
    Mar 1, 2022 at 8:12
0

Finally, we solved this problem in another project by removing the Liquibase migration from microservice start-up. Now a separate Jenkins job applies the migration, and another Jenkins job deploys and starts the microservice after the migration has been applied. So the microservice itself no longer applies database updates.
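
For illustration, here is a minimal sketch of the kind of standalone migration step such a Jenkins job could run before the deployment job, using the Liquibase Java API (the Liquibase CLI or Maven plugin would achieve the same). The environment variable names and changelog path are assumptions, not part of the original setup.

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Contexts;
import liquibase.LabelExpression;
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class MigrationJob {

    public static void main(String[] args) throws Exception {
        // Connection settings supplied by the Jenkins job (e.g. from credentials/env vars).
        try (Connection jdbc = DriverManager.getConnection(
                System.getenv("DB_URL"), System.getenv("DB_USER"), System.getenv("DB_PASSWORD"))) {

            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(jdbc));

            Liquibase liquibase = new Liquibase(
                    "db/changelog/db.changelog-master.xml",   // assumed changelog location
                    new ClassLoaderResourceAccessor(),
                    database);

            // Apply all pending changesets; the microservice deployment no longer runs Liquibase.
            liquibase.update(new Contexts(), new LabelExpression());
        }
    }
}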

-1

I encountered this issue when one of the Java applications I manage abruptly shut down.

The logs displayed the error below when the application tried to start:

waiting to acquire changelock

Here's how I solved it:

  • Stopping the application
  • Deleting the databasechangelog and databasechangeloglock tables in the database connected to the application.
  • Restarting the application

In my case the application was connected to two databases. I had to delete the databasechangelog and databasechangeloglock tables in both databases and then restart the application. The changelog tables in both databases have to be in sync.

After this the application was able to acquire the changelog lock.
