I updated PlaylistShare on this server recently and ran into an issue: the backend returned a 500 error when retrieving albums. The previous update had gone smoothly, so I didn’t initially investigate that angle. I was wrong, but we’ll get to that later.
The app runs fine on my local server, and I had never hit this issue in any of my tests, so I didn’t think the bug was in my code. I checked it anyway, of course, and didn’t see anything suspicious.
I then checked whether the Docker Compose setup was working properly. I reviewed the build log but didn’t find anything relevant at first glance. In reality, there was something there, and I overlooked it.
Next, I checked the container output with `docker attach <container_name>` and got the complete error message, which told me that one of my new tables had an error.
I opened a shell inside the container with `docker exec -it <container_name> sh` and started investigating what might be wrong.
After some initial checks (such as confirming that the update had actually been applied), I examined the database and migrations directories. That’s when I realized my mistake.
What I hadn’t noticed in the Docker build log was that the latest database migrations I had written were never executed.
When I configured my Docker setup for PlaylistShare, I mounted the entire `app/database` directory as a volume. It seemed like a good idea: the `database.sqlite` file lives in this directory by default. All my recent updates had worked fine because I hadn’t touched the database schema; they were mostly frontend changes.
But when I finally updated the database schema… Well, the new migration files were not taken into account, because they live in `/app/database/migrations`, which is inside the `app/database` volume. A volume’s contents persist across container updates, so that directory is never refreshed when the image is rebuilt: my migration files never made it into the container, and the out-of-date database triggered the 500 error I was seeing.
I wanted to fix this quickly in production, so I simply copied the migration files from Git into the Docker volume using the same shell as above, then ran the migration manually.
That did the trick, but it’s a short-term solution. I don’t want to update my database by hand with every migration; it should be an automated process.
To fix the problem for good, I had a list of tasks to complete:
- Change the database path: without a volume, every update would erase the database, so I can no longer use the default folder.
- Update the volume configuration so it no longer covers the entire `app/database` directory.
- Ensure that the production database remains unaffected during the update process to prevent any loss of data.
The first two points are the easiest. For the first one, add an environment variable in the backend Dockerfile:
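As a sketch of what this could look like, assuming a backend that reads its SQLite path from an environment variable (the variable name `DB_DATABASE` and the `/data` path are assumptions about my setup, not something every stack uses):

```dockerfile
# Point the application at a database location outside the source tree,
# so the data directory can be mounted as a volume on its own,
# without shadowing the migrations directory.
ENV DB_DATABASE=/data/database.sqlite
```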
For the second one, map this path to a named volume in docker-compose.yml:
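A minimal sketch, reusing the hypothetical `/data` path from above (the service and volume names are placeholders for my setup):

```yaml
services:
  backend:
    build: .
    volumes:
      # Only the data directory is a volume now;
      # the migrations stay in the image and get updated with it.
      - playlistshare-db:/data

volumes:
  # Declaring the volume at the top level gives it a stable, readable name
  # instead of an auto-generated hash.
  playlistshare-db:
```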
The top-level `volumes` section is used for naming the volume. I hadn’t named my volume until now, so its auto-generated name was excessively long and didn’t convey any meaning, which is quite frustrating when you need to work with it.
The third and final point is a bit more complex: I couldn’t achieve it by simply altering a few configuration files.
After reading some posts on StackOverflow and the Docker documentation, and asking ChatGPT (ok, I didn’t really ask ChatGPT), I identified two distinct approaches to this problem:
- Manually copy the volume’s contents directly on the host file system. Volumes are located at /var/lib/docker/volumes/ on Debian (likely the same on other Linux distributions, but I haven’t verified this).
- Back up and restore the volume using Docker (refer to the documentation).
I chose the first option for various reasons, the main one being that it’s simpler and quicker. Besides, I only need to do this once.
I will still look into the second option later, though, because I intend to make this instance public at some point, which will require an automatic database reset; but that’s a separate topic.
The steps I followed are as follows:
- First, back up the database from the old volume:
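A sketch of this step; the volume name is a placeholder, so check `docker volume ls` and `docker volume inspect` for yours, and note that reading `/var/lib/docker` usually requires root:

```shell
# Copy the database file out of the volume's backing directory on the host.
# <old_volume_name> is a placeholder for the auto-generated volume name.
mkdir -p ~/backup
cp /var/lib/docker/volumes/<old_volume_name>/_data/database.sqlite ~/backup/database.sqlite
```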
- Then, update the containers with the latest version and the new volume configuration:
git fetch && git pull
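Pulling the code alone doesn’t update the running containers; with Docker Compose, rebuilding and recreating them would look something like this (assuming the Compose v2 CLI):

```shell
# Rebuild the images and recreate the containers in the background,
# which also creates the new named volume on first run.
docker compose up -d --build
```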
- Replace the database in the new volume with the backup made in step 1:
cp ~/backup/database.sqlite /var/lib/docker/volumes/new_volume/_data/database.sqlite
- Finally, run the migration manually from a shell inside the container:
docker exec -it <container_name> sh
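Inside that shell, the actual migration command depends on your framework; assuming a Laravel-style backend (a guess based on the `database/migrations` layout, not something stated here), it would be:

```shell
# Apply any pending migrations; --force skips the confirmation
# prompt that Laravel shows in production environments.
php artisan migrate --force
```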
And that’s it! It worked for me. A quick and dirty solution, but as I mentioned earlier, it’s a one-time-only operation.
One last thing: you should clean up the old volume. I removed it using the following commands:
docker volume ls
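The listing shows the volume names; removing the old one is then a single command (the name below is a placeholder for the auto-generated one):

```shell
# Delete the old, now-unused volume. This fails if a container
# still references it, which is a useful safety check.
docker volume rm <old_volume_name>
```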
Not much more to say; I just wanted to share my debugging process here, which is quite straightforward.
Perhaps some readers will learn something from this; maybe someone made the same mistake as me and can see how to fix it. In any case, I hope this post proves useful, or at the very least, interesting.
Take care, folks.