NullifyNetwork

The blog and home page of Simon Soanes

During my switch to Postgres, I also deployed an active/passive physical replication cluster, at the time on Postgres 16.  I wanted to upgrade to Postgres 17, so I first took a backup:-

    pg_dumpall > 20241209-PreUpgradeBackup.bak

I created a new Postgres 17 server and restored the dump there, so I could check and test that PG17 works with everything:-

    /usr/local/pgsql/bin/psql -d postgres -f ~/backups/20241209-PreUpgradeBackup.bak

But a dump-and-restore would mean a period of data loss in production, which could be unacceptable, and when upgrading Postgres it is also important to rebuild your replicas from the new instance.  That requirement provides the opportunity to do a seamless, zero downtime upgrade - and it's also a good way to upgrade each server separately.  I also took this as an opportunity to test my failover processes, so I removed my master server from the pool [1] (keeping it intact so I could bring it back if there was a problem with the upgrade) and ran this on a secondary server:-

    SELECT pg_promote(); 
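Before promoting, it's worth confirming the secondary has caught up, and afterwards that it really is writeable.  A minimal sketch (the functions are standard Postgres admin functions; the exact flow is my own habit, not a required step):

```sql
-- On the secondary, before promoting: check how far behind we are.
-- The two LSNs being equal means all received WAL has been replayed.
SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();

-- Promote, then confirm the instance has left recovery
-- (pg_is_in_recovery() returns false once it is writeable).
SELECT pg_promote();
SELECT pg_is_in_recovery();
```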

Which makes the instance writeable, instantly failing over (using the connection string approach discussed in my previous blog post on migrating to Postgres, the applications will now switch to writing to this location).  I checked everything ran smoothly for DR purposes, installed Postgres 17 on a new instance, configured the pg_hba.conf file to allow replication from the new master, and ran:-

    pg_basebackup -h yourmasterservername.yourdomain.com -U replica_user -X stream -C -S yourreplicationslotname -v -R -W -D /var/lib/postgresql/17/data/
Remember to run this as your postgres user, or you will need to run chown -R postgres /var/lib/postgresql/17/data afterwards.
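The pg_hba.conf change on the new master might look something like this (the user name matches the command above; the address range and auth method are placeholders for your own network):

```
# Allow replica_user to open replication connections from the standby's network
host    replication    replica_user    192.0.2.0/24    scram-sha-256
```

Remember to reload the configuration (e.g. SELECT pg_reload_conf();) for the change to take effect.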
 
This will recreate the entire data directory and populate it with a clone of the master server, leaving the server in standby (recovery) mode so that it's a valid secondary.  If you want your replicas to support reads, set hot_standby to on in the postgresql.conf file.  You can also then wipe your original master and re-add it as a secondary if you want to re-use the machine/VM.
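Once the base backup completes and the new secondary is started, replication can be checked from both sides - a sketch, assuming psql access to each server:

```sql
-- On the master: one row per connected standby; state should be
-- 'streaming' once the initial catch-up has finished.
SELECT client_addr, state, sync_state FROM pg_stat_replication;

-- On the secondary: confirms it is in recovery (a read-only standby)
-- and shows when it last replayed a transaction from the master.
SELECT pg_is_in_recovery(), pg_last_xact_replay_timestamp();
```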
 
[1] - The pooling I'm using is a custom-written utility that monitors my servers and updates my DNS automatically when there's a change in state.  If you are in Azure, there's a Virtual Endpoints feature that does this, or you can use a Postgres proxy - there are a few of them around.  If you are in an enterprise situation, your load balancers can probably handle this.
