Performing a Postgres major version rolling upgrade on a PGD cluster v6.1.0
Upgrading Postgres major versions
Upgrading a Postgres database's major version to access improved features, performance enhancements, and security updates is a common administration task. Upgrading an EDB Postgres Distributed (PGD) cluster is essentially the same process, but it's performed as a rolling upgrade.
The rolling upgrade process allows updating individual cluster nodes to a new major Postgres version while maintaining cluster availability and operational continuity. This approach minimizes downtime and ensures data integrity by allowing the rest of the cluster to remain operational as each node is upgraded sequentially.
The following overview of the general instructions, together with the worked examples, helps provide a smooth and controlled upgrade process.
Prepare the upgrade
To prepare for the upgrade, identify the subgroups and nodes you're trying to upgrade and note an initial upgrade order.
To do this, connect to one of the nodes using SSH and run the pgd nodes list command:
sudo -u postgres pgd nodes list
The pgd nodes list command shows you all the nodes in your PGD cluster and the subgroup to which each node belongs.
Then find out which node is the write leader in each subgroup:
sudo -u postgres pgd group <group_name> show --summary
This command shows you information about the PGD group identified by <group_name>, including which node is the write leader.
To maintain operational continuity, you need to switch write leaders over to another node in their subgroup before you can upgrade them.
To keep the number of planned switchovers to a minimum, when upgrading a subgroup of nodes, upgrade each subgroup's write leader last.
To make sure the node being upgraded does not become a write leader until the upgrade is complete, you should fence the node before initiating the upgrade and then unfence the node after the node upgrade is completed.
Even though you verified which node is the current write leader for planning purposes, the write leader of a subgroup could change to another node at any moment for operational reasons before you upgrade that node. Therefore, you still need to verify that a node isn't the write leader just before upgrading that node.
You now have enough information to determine your upgrade order, one subgroup at a time, aiming to upgrade the identified write leader node last in each subgroup.
Perform the upgrade on each node
Important
To help prevent data loss, before starting the upgrade process, ensure that your databases and configuration files are backed up.
Using the preliminary order, perform the following steps on each node while connected via SSH:
Confirm the current Postgres version
View versions from PGD:
sudo -u postgres pgd nodes list --versions
Ensure that the expected major version is running.
Verify that the target node isn't the write leader
Check whether the target node is the write leader for the group you're upgrading:
sudo -u postgres pgd group <group_name> show --summary
If the target node is the current write leader for the group/subgroup you're upgrading, perform a planned switchover to another node:
sudo -u postgres pgd group <group_name> set-leader <new_leader_node_name>
Fence the node
To make sure the node being upgraded doesn't become a write leader until the upgrade is complete, fence the node before initiating the upgrade.
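One way to do this, shown here as a sketch based on the route_fence node option used in the PGD 5 worked example later on this page, is:
sudo -u postgres pgd node <node_name> set-option route_fence true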
Stop Postgres on the target node
Stop the Postgres service on the current node:
sudo systemctl stop postgres
The target node is no longer actively participating as a node in the cluster.
Install PGD and utilities
Install PGD and its utilities compatible with the Postgres version you're upgrading to:
sudo apt install edb-bdr6-pg<new_postgres_version_number>
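The package manager and package name depend on your platform and Postgres flavor. For example, on a RHEL-like system the equivalent install might look like the following (the PGD 5 worked example later on this page uses edb-pgd6-essential-epas17 for EDB Postgres Advanced Server):
sudo dnf install edb-bdr6-pg<new_postgres_version_number>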
Initialize the new Postgres instance
Create a directory to house the database files for the new version of PostgreSQL:
sudo mkdir -p /opt/postgres/datanew
Ensure that the postgres user has ownership of the directory, using chown.
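For example, assuming the directory created in the previous step and the default postgres operating system user:
sudo chown -R postgres:postgres /opt/postgres/datanew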
Initialize a new PostgreSQL database cluster in the directory you just created. This step uses the initdb command provided by the newly installed version of PostgreSQL. Include the --data-checksums flag to ensure the cluster uses data checksums.
sudo -u postgres <path_to_postgres_bin>/initdb -D /opt/postgres/datanew --data-checksums
Replace <path_to_postgres_bin> with the path to the bin directory of the newly installed PostgreSQL version. You may need to run this command as the postgres user or another user with appropriate permissions.
Migrate configuration to the new Postgres version
- Locate the following configuration files in your current PostgreSQL data directory:
  - postgresql.conf — The main configuration file, containing settings related to the database system.
  - postgresql.auto.conf — Contains settings set by PostgreSQL, such as those modified by the ALTER SYSTEM command.
  - pg_hba.conf — Manages client authentication, specifying which users can connect to which databases from which hosts.
  - The entire conf.d directory (if present) — Allows for organizing configuration settings into separate files for better manageability.
- Copy these files and the conf.d directory to the new data directory you created for the upgraded version of PostgreSQL.
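For example, assuming the directory layout used in this section (adjust the paths to match your installation, and copy conf.d only if it's present):
sudo -u postgres cp /opt/postgres/data/postgresql.conf /opt/postgres/datanew/
sudo -u postgres cp /opt/postgres/data/postgresql.auto.conf /opt/postgres/datanew/
sudo -u postgres cp /opt/postgres/data/pg_hba.conf /opt/postgres/datanew/
sudo -u postgres cp -r /opt/postgres/data/conf.d /opt/postgres/datanew/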
Verify the Postgres service is inactive
Before proceeding, it's important to ensure that no PostgreSQL processes are active for both the old and the new data directories. This verification step prevents any data corruption or conflicts during the upgrade process.
Use the sudo systemctl status postgres command to verify that Postgres was stopped. If it isn't stopped, run sudo systemctl stop postgres and verify again that it was stopped.
Swap PGDATA directories for version upgrade
Rename /opt/postgres/data to /opt/postgres/dataold and /opt/postgres/datanew to /opt/postgres/data.
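For example:
sudo mv /opt/postgres/data /opt/postgres/dataold
sudo mv /opt/postgres/datanew /opt/postgres/data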
This step readies your system for the next crucial phase: running pgd node upgrade to finalize the PostgreSQL version transition.
Verify upgrade feasibility
The pgd node upgrade tool offers a --check option designed to perform a preliminary scan of your current setup, identifying any potential issues that could hinder the upgrade process.
You need to run this check from an upgrade directory owned by the postgres user, such as /home/upgrade/, so that the upgrade log files created by pgd node upgrade can be stored. To initiate the safety check, append the --check option to your pgd node upgrade command.
This operation simulates the upgrade process without making any changes, providing insights into any compatibility issues, deprecated features, or configuration adjustments required for a successful upgrade.
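A typical invocation looks like the following sketch. The flags mirror the full command shown in the PGD 5 worked example later on this page; the bracketed values are placeholders you need to replace for your installation:
cd /home/upgrade
sudo -u postgres pgd node <node_name> upgrade \
  --database <database_name> \
  -B <path_to_new_postgres_bin> \
  --old-bindir <path_to_old_postgres_bin> \
  --old-datadir /opt/postgres/dataold \
  --new-datadir /opt/postgres/data \
  --username <database_superuser> \
  --old-port <port> \
  --new-port <port> \
  --check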
Address any warnings or errors indicated by this check to ensure an uneventful transition to the new version.
Execute the Postgres major version upgrade
Execute the upgrade process by running the pgd node <node_name> upgrade command without the --check option.
It's essential to monitor the command output for any errors or warnings that require attention.
The time the upgrade process takes depends on the size of your database and the complexity of your setup.
Update the Postgres service configuration
Update the service configuration to reflect the new PostgreSQL version by updating the version number in the postgres.service file:
sudo sed -i -e 's/<old_version_number>/<new_version_number>/g' /etc/systemd/system/postgres.service
Refresh the system's service manager to apply these changes:
sudo systemctl daemon-reload
Restart Postgres
Proceed to restart the PostgreSQL service:
sudo systemctl start postgres
Validate the new Postgres version
Verify that your PostgreSQL instance is now upgraded:
sudo -u postgres pgd nodes list --versions
Unfence the node
You can unfence the node after validating the upgrade.
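For example, mirroring the fencing step shown earlier:
sudo -u postgres pgd node <node_name> set-option route_fence false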
Clean up post-upgrade
- Run vacuumdb with the ANALYZE option immediately after the upgrade but before introducing a heavy production load (see the example below). Running this command minimizes the immediate performance impact, preparing the database for more accurate testing.
- Remove the old version's data directory, /opt/postgres/dataold.
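For example, using the vacuumdb binary shipped with the new Postgres version (the path is a placeholder):
sudo -u postgres <path_to_postgres_bin>/vacuumdb --all --analyze-in-stages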
Worked example: Upgrade PGD 4 to PGD 6.1
This worked example describes an in-place major version rolling upgrade from PGD 4 to PGD 6.1.
Overview
A PGD 4 cluster that uses HARP Proxy-based routing continues to use that routing method for all nodes until the entire cluster is upgraded to 6.1.0 or later. HARP Proxy-based routing works the same way in a mixed-version cluster. HARP uses its own mechanism to elect a leader, since a 4.x cluster doesn't have a write leader. Once the entire cluster is upgraded to version 6.1.0 or later, Connection Manager is already enabled and ready. Applications can then be pointed to the Connection Manager port on a node, and it starts routing. Once routing through Connection Manager is confirmed to be working, HARP Proxy can be stopped.
Note
The worked example assumes HARP Proxy is not co-located with a PGD node, as recommended in the PGD architecture. If you have HARP Proxy co-located with a PGD node, contact EDB Support for upgrade instructions.
Confirm the harp-proxy leader
Start the upgrade on a node that isn't the harp-proxy leader. Confirm which node is the harp-proxy leader:
test-pgd6major-d1:~ $ harpctl get leader a
Cluster  Name              Location Ready Fenced Allow Routing Routing Status Role    Type Lock Duration
-------  ----              -------- ----- ------ ------------- -------------- ----    ---- -------------
bdrgroup test-pgd6major-d2 a        true  false  true                         primary bdr  6
Fence the node
Fence off the node to be upgraded from HARP and then verify that it was fenced, so that it doesn't become the leader in the middle of the upgrade:
test-pgd6major-d1:~ $ harpctl fence test-pgd6major-d1
INFO cmd/fence.go:42 fence node test-pgd6major-d1
test-pgd6major-d1:~ $ harpctl get nodes
Cluster  Name              Location Ready Fenced Allow Routing Routing Status Role    Type Lock Duration
-------  ----              -------- ----- ------ ------------- -------------- ----    ---- -------------
bdrgroup test-pgd6major-d1 a        false true   true          N/A            primary bdr  6
bdrgroup test-pgd6major-d2 a        true  false  true          ok             primary bdr  6
Stop the Postgres service
On the fenced node, stop the Postgres service.
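For example, if Postgres runs as a systemd service:
sudo systemctl stop postgres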
Stop HARP manager
On the fenced node, stop HARP manager.
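For example, assuming HARP Manager runs as the harp-manager systemd service (the service name can differ in your deployment):
sudo systemctl stop harp-manager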
Remove and install packages
On the fenced node, remove the PGD 4.4 and CLI packages, and install the PGD 6.1 packages.
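The exact package names depend on your platform and Postgres flavor. The following RHEL-style sketch uses placeholders for the PGD 4.4 packages and the PGD 6.1 package naming shown earlier on this page:
sudo dnf remove <pgd4_bdr_package> <pgd4_cli_package>
sudo dnf install edb-bdr6-pg<postgres_version_number>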
Start the Postgres service
On the fenced node, start the Postgres service. This performs an in-place upgrade of the PGD local node to PGD 6.1 with Connection Manager enabled.
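For example:
sudo systemctl start postgres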
Start HARP manager
On the fenced node, start HARP manager.
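For example, again assuming the harp-manager service name:
sudo systemctl start harp-manager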
Unfence the node
Unfence the upgraded node from HARP:
test-pgd6major-d1:~ $ harpctl unfence test-pgd6major-d1
Repeat steps for all nodes
Repeat the same steps on all other nodes.
Confirm cluster version
Confirm that the updated cluster version is 6001 by querying bdr.group_raft_details.
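For example, assuming the PGD database is named bdrdb, a minimal check from any upgraded node is:
sudo -u postgres psql bdrdb -c "SELECT * FROM bdr.group_raft_details;"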
Confirm SCRAM hashes
From any of the upgraded nodes, run the following query to ensure that the SCRAM password hashes are the same across all nodes for each user. This is required before applications switch to Connection Manager.
DO $$
DECLARE
    rec RECORD;
    command TEXT;
BEGIN
    FOR rec IN
        SELECT rolname, rolpassword
        FROM pg_authid
        WHERE rolcanlogin = true
          AND rolpassword LIKE 'SCRAM-SHA%'
    LOOP
        command := 'ALTER ROLE ' || quote_ident(rec.rolname)
                   || ' WITH ENCRYPTED PASSWORD ' || quote_literal(rec.rolpassword) || ';';
        EXECUTE command;
    END LOOP;
END;
$$;

SELECT wait_slot_confirm_lsn(NULL,NULL);
Enable routing
Enable node group routing according to your global or local routing requirement. For local routing, enable it on the subgroups; for global routing, enable it on the top-level group.
bdrdb=# SELECT bdr.alter_node_group_option(node_group_name := 'bdrgroup',config_key := 'enable_routing', config_value := true::TEXT);
Output:
 alter_node_group_option
-------------------------

(1 row)
Switch to Connection Manager
It should now be safe to switch your application to Connection Manager.
Stop HARP Manager and HARP Proxy services
It should now be safe to stop any running HARP Manager and HARP Proxy services.
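For example, assuming the harp-manager and harp-proxy systemd service names (adjust to your deployment):
sudo systemctl stop harp-proxy
sudo systemctl stop harp-manager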
Note
This completes the worked example for an in-place major version rolling upgrade from PGD 4 to PGD 6.1.
Worked example: Upgrade PGD 5 to PGD 6.1
This worked example describes an in-place major version rolling upgrade (PGD and Postgres) of a 3-node PGD 5, EPAS 13 cluster to a PGD 6, EPAS 17 cluster using the pgd node upgrade command.
Prerequisites
Ensure you have a 3-node PGD 5, EPAS 13 cluster up and running. This is a TPA-deployed cluster.
pgd-a1:/home/rocky $ pgd nodes list
Output:
Node Name  Group Name Node Kind Join State Node Status
---------- ---------- --------- ---------- -----------
pgd-a1     group-a    data      ACTIVE     Up
pgd-a2     group-a    data      ACTIVE     Up
witness-a1 group-a    witness   ACTIVE     Up
Install packages for the new server and PGD
Ensure that the EPAS 17 packages and the corresponding PGD 6 packages are installed on all nodes in the cluster. To prevent binary conflicts, you must remove the PGD 5 packages (edb-pgd5-cli and edb-bdr5-epas13) before installing the PGD 6 packages. The commands below were used on the RHEL 8 platform. Use the appropriate commands for your specific platform.
dnf remove edb-pgd5-cli
dnf install edb-as17-server edb-pgd6-essential-epas17 -y
Pre-upgrade steps
Version check
Check the current version of the cluster (optional).
pgd-a1:/home/rocky $ pgd nodes list --versions
Output:
Node Name  BDR Version Postgres Version
---------- ----------- ----------------
pgd-a1     5.9.0       13.22.28
pgd-a2     5.9.0       13.22.28
witness-a1 5.9.0       13.22.28
Move to Connection Manager
PGD 5 uses PGD Proxy for routing. In PGD 6, PGD Proxy has been replaced with Connection Manager. When you upgrade from PGD 5 to PGD 6, follow the steps in PGD 5 - Moving from PGD Proxy to Connection Manager to move to Connection Manager.
Write leader node verification
Ensure that the node you want to upgrade is not the write leader node.
pgd-a1:/home/rocky $ pgd group group-a show --summary
Output:
Group Property    Value
----------------- -------
Group Name        group-a
Parent Group Name dc-1
Group Type        data
Write Leader      pgd-a2
Commit Scope
The current write leader is node pgd-a2, so we are good to upgrade node pgd-a1.
If the node you want to upgrade is the current write leader, switch the write leader to a different node.
Use the pgd group set-leader command to switch the write leader if required:
witness-a1:/home/rocky $ /usr/edb/as17/bin/pgd group group-a set-leader pgd-a1
witness-a1:/home/rocky $ /usr/edb/as17/bin/pgd group group-a show --summary
Output:
 Group Property    | Value
-------------------+---------
 Group Name        | group-a
 Parent Group Name | dc-1
 Group Type        | data
 Write Leader      | pgd-a1
 Commit Scope      |
Fence the node
Fence a node in this cluster with pgd node <node-name> set-option route_fence true so that it does not become the write leader.
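For example, to fence node pgd-a1 using the PGD 6 CLI installed earlier:
pgd-a1:/home/rocky $ /usr/edb/as17/bin/pgd node pgd-a1 set-option route_fence true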
Initialize the new Postgres instance
Execute the initdb utility to initialize the new server. Ensure the --data-checksums option is set.
/usr/edb/as17/bin/initdb -D /var/lib/edb/as17/data -E UTF8 --data-checksums
Create the new data dir if you don't want to use the default one. This example uses the default for simplicity.
Migrate configuration to the new Postgres version
Copy the following files and directory (if present), to the new data directory you created for the upgraded version of PostgreSQL:
- postgresql.conf
- postgresql.auto.conf
- pg_hba.conf
- the conf.d directory
cp /opt/postgres/data/postgresql.conf /var/lib/edb/as17/data/
cp /opt/postgres/data/postgresql.auto.conf /var/lib/edb/as17/data/
cp /opt/postgres/data/pg_hba.conf /var/lib/edb/as17/data/
If you have a TPA-deployed cluster, copy the conf.d directory as well:
cp -r /opt/postgres/data/conf.d /var/lib/edb/as17/data/
Unsupported configurations in PGD 6
Some configurations may not be supported in PGD 6. In such cases, you will need to find an equivalent setting or determine if the configuration can be safely ignored.
For instance, you may encounter the operator_precedence_warning GUC, which can be ignored in the new configuration.
Ensure both the old and new servers are shut down:
sudo systemctl stop postgres
sudo systemctl status postgres
Note
systemctl commands were used in this example because the PostgreSQL instance is configured as a service. You might need to use the pg_ctl utility if your setup is different.
Dry run check
Before running the actual upgrade, perform a dry run to check compatibility and upgrade feasibility. The pgd node upgrade tool has a --check option, which performs a dry run of some of the upgrade process. You can use this option to ensure the upgrade goes smoothly. Run the upgrade command with the --check option.
/usr/edb/as17/bin/pgd node pgd-a1 upgrade \
  --database bdrdb -B /usr/edb/as17/bin \
  --socketdir /tmp \
  --old-bindir /usr/edb/as13/bin \
  --old-datadir /opt/postgres/data \
  --new-datadir /var/lib/edb/as17/data \
  --username enterprisedb \
  --old-port 5444 \
  --new-port 5444 \
  --check
A successful check should return output as shown:
Performing BDR Postgres Checks
------------------------------
Getting old PG instance shared directory  ok
Getting new PG instance shared directory  ok
Collecting pre-upgrade new PG instance control data  ok
Checking new cluster state is shutdown  ok
Checking BDR extension versions  ok
Checking Postgres versions  ok

Finished BDR pre-upgrade steps, calling pg_upgrade
--------------------------------------------------

Performing Consistency Checks
-----------------------------
Checking cluster versions  ok
Checking database user is the install user  ok
Checking database connection settings  ok
Checking for prepared transactions  ok
Checking for contrib/isn with bigint-passing mismatch  ok
Checking data type usage  ok
Checking for user-defined encoding conversions  ok
Checking for user-defined postfix operators  ok
Checking for incompatible polymorphic functions  ok
Checking for not-null constraint inconsistencies  ok
Checking for presence of required libraries  ok
Checking database user is the install user  ok
Checking for prepared transactions  ok
Checking for new cluster tablespace directories  ok

*Clusters are compatible*
Execute the upgrade
If the dry run check passed, you can execute the upgrade by running the command without the --check option:
/usr/edb/as17/bin/pgd node pgd-a1 upgrade \
  --database bdrdb -B /usr/edb/as17/bin \
  --socketdir /tmp \
  --old-bindir /usr/edb/as13/bin \
  --old-datadir /opt/postgres/data \
  --new-datadir /var/lib/edb/as17/data \
  --username enterprisedb \
  --old-port 5444 \
  --new-port 5444
A successful upgrade should return output as shown:
Performing BDR Postgres Checks
------------------------------
Getting old PG instance shared directory  ok
Getting new PG instance shared directory  ok
Collecting pre-upgrade new PG instance control data  ok
Checking new cluster state is shutdown  ok
Checking BDR extension versions  ok
Checking Postgres versions  ok

Collecting Pre-Upgrade BDR Information
--------------------------------------
Collecting pre-upgrade old PG instance control data  ok
Connecting to the old PG instance  ok
Checking for BDR extension  ok
Checking BDR node name  ok
Terminating connections to database  ok
Waiting for all slots to be flushed  ok
Disconnecting from old cluster PG instance  ok
Stopping old PG instance  ok
Starting old PG instance with BDR disabled  ok
Connecting to the old PG instance  ok
Collecting replication origins  ok
Collecting replication slots  ok
Disconnecting from old cluster PG instance  ok
Stopping old PG instance  ok

Finished BDR pre-upgrade steps, calling pg_upgrade
--------------------------------------------------

Performing Consistency Checks
-----------------------------
Checking cluster versions  ok
Checking database user is the install user  ok
Checking database connection settings  ok
Checking for prepared transactions  ok
Checking for contrib/isn with bigint-passing mismatch  ok
Checking data type usage  ok
Checking for user-defined encoding conversions  ok
Checking for user-defined postfix operators  ok
Checking for incompatible polymorphic functions  ok
Checking for not-null constraint inconsistencies  ok
Creating dump of global objects  ok
Creating dump of database schemas  ok
Checking for presence of required libraries  ok
Checking database user is the install user  ok
Checking for prepared transactions  ok
Checking for new cluster tablespace directories  ok

If `pg_upgrade` fails after this point, you must re-initdb the new cluster before continuing.

Performing Upgrade
------------------
Setting locale and encoding for new cluster  ok
Analyzing all rows in the new cluster  ok
Freezing all rows in the new cluster  ok
Deleting files from new pg_xact  ok
Copying old pg_xact to new server  ok
Setting oldest XID for new cluster  ok
Setting next transaction ID and epoch for new cluster  ok
Deleting files from new pg_multixact/offsets  ok
Copying old pg_multixact/offsets to new server  ok
Deleting files from new pg_multixact/members  ok
Copying old pg_multixact/members to new server  ok
Setting next multixact ID and offset for new cluster  ok
Resetting WAL archives  ok
Setting frozenxid and minmxid counters in new cluster  ok
Restoring global objects in the new cluster  ok
Restoring database schemas in the new cluster  ok
Copying user relation files  ok
Setting next OID for new cluster  ok
Sync data directory to disk  ok
Creating script to delete old cluster  ok
Checking for extension updates  notice

Your installation contains extensions that should be updated
with the ALTER EXTENSION command. The file
    update_extensions.sql
when executed by psql by the database superuser will update
these extensions.

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
    /usr/edb/as17/bin/vacuumdb -U enterprisedb --all --analyze-in-stages

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

pg_upgrade complete, performing BDR post-upgrade steps
------------------------------------------------------
Collecting post-upgrade old PG instance control data  ok
Collecting post-upgrade new PG instance control data  ok
Checking LSN of the new PG instance  ok
Starting new PG instance with BDR disabled  ok
Connecting to the new PG instance  ok
Creating replication origin bdr_bdrdb_dc_1_pgd_a2  ok
Advancing replication origin bdr_bdrdb_dc_1_pgd_a2 to 0/3...  ok
Creating replication origin bdr_bdrdb_dc_1_witness_a1  ok
Advancing replication origin bdr_bdrdb_dc_1_witness_a1 to...  ok
Creating replication slot bdr_bdrdb_dc_1  ok
Creating replication slot bdr_bdrdb_dc_1_witness_a1  ok
Creating replication slot bdr_bdrdb_dc_1_pgd_a2  ok
Stopping new PG instance  ok
Note
You can use the --link option to create hard links instead of copying files. This option works only if both data directories are on the same filesystem. For more information, see pg_upgrade in the PostgreSQL documentation.
Post-upgrade steps
Update the Postgres service file
Update the server version, data directory, and binary directories of the new server in the PostgreSQL service file, located at /etc/systemd/system/postgres.service.
An example of what the updated service file looks like:
[Unit]
Description=Postgres 17 (TPA)
After=syslog.target
After=network.target

[Service]
Type=simple
User=enterprisedb
Group=enterprisedb
OOMScoreAdjust=-1000
Environment=PG_OOM_ADJUST_VALUE=0
Environment=PGDATA=/var/lib/edb/as17/data
StandardOutput=syslog
ExecStart=/usr/edb/as17/bin/edb-postgres -D ${PGDATA} -c config_file=/var/lib/edb/as17/data/postgresql.conf
ExecStartPost=+/bin/bash -c 'echo 0xff > /proc/$MAINPID/coredump_filter'
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGINT
Restart=no
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
Start the Postgres service
Execute a daemon-reload and start the Postgres service:
systemctl daemon-reload
systemctl start postgres
Note
If your server was not running as a service, you can skip the service file update and start the server using the pg_ctl utility.
Verify the upgraded cluster versions
Use the following command to verify the upgraded cluster versions:
pgd-a1:/home/rocky $ /usr/edb/as17/bin/pgd nodes list --versions
Output:
 Node Name  | BDR Version         | Postgres Version
------------+---------------------+------------------
 pgd-a1     | PGD 6.1.0 Essential | 17.6.0
 pgd-a2     | 5.9.0               | 13.22.28
 witness-a1 | 5.9.0               | 13.22.28
The BDR version for node pgd-a1 was upgraded to 6.1.0 and the Postgres version to 17.6.0.
Unfence the node
Unfence a node in this cluster with pgd node <node-name> set-option route_fence false so that it can become the write leader again.
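For example, to unfence node pgd-a1:
pgd-a1:/home/rocky $ /usr/edb/as17/bin/pgd node pgd-a1 set-option route_fence false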
Verify the Connection Manager is working
Execute a query through the Connection Manager port (6444 by default) on the upgraded node:
pgd-a1:/home/rocky $ psql "host=pgd-a1 port=6444 dbname=bdrdb user=enterprisedb " -c "select node_name from bdr.local_node_summary;"
Output:
 node_name
-----------
 pgd-a2
(1 row)
Clean up and vacuum analyze
As a best practice, run a vacuum over the database at this point. When the upgrade ran, you may have noticed the post-upgrade report included:
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
    /usr/edb/as17/bin/vacuumdb -U enterprisedb --all --analyze-in-stages

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
You can run the vacuum now. On the target node, run:
/usr/edb/as17/bin/vacuumdb -U enterprisedb --all --analyze-in-stages
If you're sure you don't need to revert this node, you can also clean up the old cluster's data files:
./delete_old_cluster.sh
Upgrade the remaining nodes
You must perform these steps for every node in the cluster. The only difference will be the node name in the upgrade command. For quick reference, the commands for nodes pgd-a2 and witness-a1 are provided:
Node pgd-a2
/usr/edb/as17/bin/pgd node pgd-a2 upgrade \
  --database bdrdb -B /usr/edb/as17/bin \
  --socketdir /tmp \
  --old-bindir /usr/edb/as13/bin \
  --old-datadir /opt/postgres/data \
  --new-datadir /var/lib/edb/as17/data \
  --username enterprisedb \
  --old-port 5444 \
  --new-port 5444
Node witness-a1
/usr/edb/as17/bin/pgd node witness-a1 upgrade \
  --database bdrdb -B /usr/edb/as17/bin \
  --socketdir /tmp \
  --old-bindir /usr/edb/as13/bin \
  --old-datadir /opt/postgres/data \
  --new-datadir /var/lib/edb/as17/data \
  --username enterprisedb \
  --old-port 5444 \
  --new-port 5444
Verify the final state of the cluster
Use the following command to verify the node versions:
pgd-a2:/home/rocky $ /usr/edb/as17/bin/pgd nodes list --versions
Output:
 Node Name  | BDR Version          | Postgres Version
------------+----------------------+------------------
 pgd-a1     | PGD 6.1.0 Essential  | 17.6.0
 pgd-a2     | PGD 6.1.0 Essential  | 17.6.0
 witness-a1 | PGD 6.1.0 Essential  | 17.6.0
All nodes of the cluster have been upgraded to PGD 6.1.0 and EPAS 17.6.0.
Verify the Connection Manager
For every data node, use the following command to verify the Connection Manager:
pgd-a2:/home/rocky $ psql "host=pgd-a2 port=6444 dbname=bdrdb user=enterprisedb " -c "select node_name from bdr.local_node_summary;"
Output:
 node_name
-----------
 pgd-a1
(1 row)
pgd-a2:/home/rocky $ psql "host=pgd-a2 port=6445 dbname=bdrdb user=enterprisedb " -c "select node_name from bdr.local_node_summary;"
Output:
 node_name
-----------
 pgd-a2
(1 row)
Note
This completes the worked example of an in-place major version rolling upgrade (PGD & Postgres) of a 3-node PGD 5, EPAS 13 cluster to PGD 6, EPAS 17 cluster.