Migration of PostgreSQL databases to cloud databases

You can migrate data from your PostgreSQL database to a Servercore cloud database cluster using logical replication or a logical dump.

For your information

Before migration, create a target PostgreSQL cloud database cluster with a version no lower than that of the source cluster. If you migrate using a logical dump, the cluster versions must match.
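
To compare versions, you can run the following query on the source cluster and on the new cluster with any client, for example psql:

SHOW server_version;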

Logical replication

Logical replication uses a publish and subscribe model with one or more subscribers that subscribe to one or more publications on the publisher node. A publication is created on the external source PostgreSQL cluster, and the receiving cloud database cluster subscribes to it.

  1. Prepare the source cluster.
  2. Transfer the database schema.
  3. Create a publication on the source cluster.
  4. Create a subscription on the receiving cluster.

Prepare the source cluster

  1. Add the replication privilege to a user with access to replicated data:

    ALTER ROLE <user_name> WITH REPLICATION;

    Specify <user_name> — username.

  2. In the postgresql.conf file, set the Write-Ahead Log level (wal_level) to logical:

    wal_level = logical
  3. In the pg_hba.conf file, configure authentication:

    host    all            all    <host>    md5
    host    replication    all    <host>    md5

    Specify <host> — The IP address or DNS name of the master host of the receiving cluster.

  4. Restart PostgreSQL to apply the changes (after the restart, you can verify the settings as shown after this list):

    systemctl restart postgresql
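
You can check that the preparation took effect. A minimal verification sketch, run on the source cluster with psql (substitute your own user name):

SELECT rolname, rolreplication FROM pg_roles WHERE rolname = '<user_name>';
SHOW wal_level;

The first query should show rolreplication = t for the user, and the second should return logical.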

Transfer the database schema

The source and destination clusters must have the same database schema.

  1. Create a schema dump on the source cluster using the pg_dump utility:

    pg_dump \
    -h <host> \
    -p <port> \
    -d <database_name> \
    -U <user_name> \
    --schema-only \
    --no-privileges \
    --no-subscriptions \
    --no-publications \
    -Fd -f <dump_directory>

    Specify:

    • <host> — The IP address or DNS name of the source cluster master host;
    • <port> — port;
    • <database_name> — database name;
    • <user_name> — database user name;
    • <dump_directory> — the path to the dump.
  2. Restore the schema from the dump on the receiving cluster using the pg_restore utility (a filled-in example of both commands follows this list):

    pg_restore \
    -Fd -v \
    --single-transaction -s \
    --no-privileges \
    -O \
    -h <host> \
    -U <user_name> \
    -p <port> \
    -d <database_name> \
    <dump_directory>

    Specify:

    • <host> — The IP address or DNS name of the master host of the receiving cluster;
    • <user_name> — database user name;
    • <port> — port;
    • <database_name> — database name;
    • <dump_directory> — the path to the dump.
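
For illustration, the two commands might look like this with hypothetical values (the addresses, user, database, and directory below are examples, not values from your clusters):

# hypothetical source cluster and database
pg_dump -h 192.0.2.10 -p 5432 -d appdb -U appuser \
  --schema-only --no-privileges --no-subscriptions --no-publications \
  -Fd -f /tmp/appdb_schema

# hypothetical receiving cluster
pg_restore -Fd -v --single-transaction -s --no-privileges -O \
  -h 203.0.113.20 -U appuser -p 5432 -d appdb /tmp/appdb_schema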

Create a publication on the source cluster

To create a publication for all tables at once, you need superuser rights.

Create a publication for the tables you want to migrate:

CREATE PUBLICATION <publication_name> FOR ALL TABLES;

Specify <publication_name> — the publication name.
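
If you do not have superuser rights, or only some tables need to be migrated, you can instead create a publication for specific tables. The table names below are hypothetical, and the user typically must own the listed tables:

CREATE PUBLICATION <publication_name> FOR TABLE orders, customers;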

Create a subscription on the receiving cluster

In the receiving cloud database cluster, subscriptions can only be used by a user with the dbaas_admin role.

  1. Create a subscription on behalf of a user with the dbaas_admin role:

    CREATE SUBSCRIPTION <subscription_name> CONNECTION
    'host=<host>
    port=<port>
    dbname=<database_name>
    user=<user_name>
    password=<password>
    sslmode=verify-ca'
    PUBLICATION <publication_name>;

    Specify:

    • <subscription_name> — subscription name;
    • <host> — The IP address or DNS name of the source cluster master host;
    • <port> — port;
    • <user_name> — database user name;
    • <password> — user password;
    • <database_name> — database name;
    • <publication_name> — the publication name.
  2. You can monitor the replication status using the pg_subscription_rel catalog; its srsubstate column shows the synchronization state of each table (r means the table is ready):

    SELECT * FROM pg_subscription_rel;

    You can see the overall replication state in the pg_stat_subscription and pg_stat_replication views for subscriptions and publications respectively.

  3. Sequences are not replicated. If the database uses sequences, transfer their current values to the receiving cluster before switching the load to it (see the sketch after this list). Also, drop the subscriptions in the receiving cluster before switching the load:

    DROP SUBSCRIPTION <subscription_name>;

    Specify <subscription_name> — subscription name.
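
One way to transfer sequence values is to dump only the sequence data from the source cluster and restore it on the receiving cluster. A minimal sketch, assuming a hypothetical sequence named orders_id_seq:

# dump only the current values of the listed sequences from the source cluster
pg_dump -h <source_host> -U <user_name> -d <database_name> \
  --data-only -t orders_id_seq -f sequences.sql

# restore the sequence values in the receiving cluster
psql -h <target_host> -U <user_name> -d <database_name> -f sequences.sql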

Logical dump

Create a dump of the database in the source cluster (a file with the commands needed to restore it) and restore that dump in the destination cluster.

You can create an SQL dump of the entire database (names, tables, indexes, and foreign keys are preserved) or a dump in the custom format (which lets you restore, for example, only the schema or the data of a specific table).

For your information

If you are using PgBouncer on port 5433, change the PgBouncer pooling mode to session. If a different pooling mode is enabled for PgBouncer, the search_path of some connections may change, and tables may not be accessible by their unqualified names.
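
If you run PgBouncer yourself, the pooling mode is a single setting in pgbouncer.ini, for example:

[pgbouncer]
pool_mode = session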

SQL dump

  1. Create a database dump in the source cluster using the pg_dump utility:

    pg_dump \
    -h <host> \
    -U <user_name> \
    -d <database_name> \
    -f dump.sql

    Specify:

    • <host> — The IP address or DNS name of the source cluster master host;
    • <user_name> — username;
    • <database_name> — database name.
  2. Restore the dump in the receiving cluster using the psql utility (a filled-in example of both commands follows this list):

    psql \
    -f dump.sql \
    -h <host> \
    -p <port> \
    -U <user_name> \
    -d <database_name>

    Specify:

    • <host> — The IP address or DNS name of the master host of the receiving cluster;
    • <port> — port;
    • <user_name> — database user name;
    • <database_name> — database name.
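
For illustration, the two commands might look like this with hypothetical values (the addresses, user, and database below are examples only):

# hypothetical source cluster
pg_dump -h 192.0.2.10 -U appuser -d appdb -f dump.sql

# hypothetical receiving cluster
psql -f dump.sql -h 203.0.113.20 -p 5432 -U appuser -d appdb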

Custom dump

A dump in the custom format is compressed by default.

  1. Create a dump of the database in the source cluster using the pg_dump utility:

    pg_dump \
    -Fc -v \
    -h <host> \
    -U <user_name> \
    <database_name> > archive.dump

    Specify:

    • <host> — The IP address or DNS name of the source cluster master host;
    • <user_name> — database user name;
    • <database_name> — database name.
  2. Restore the dump in the receiving cluster using the pg_restore utility (see the note on parallel restore after this list):

    pg_restore \
    -v \
    -h <host> \
    -U <user_name> \
    -d <database_name> archive.dump

    Specify:

    • <host> — The IP address or DNS name of the master host of the receiving cluster;
    • <user_name> — database user name;
    • <database_name> — database name.
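
Because the custom format is an archive, pg_restore can also restore it with several parallel jobs, which can speed up large migrations. A sketch with a hypothetical job count of 4:

pg_restore -v -j 4 \
-h <host> \
-U <user_name> \
-d <database_name> archive.dump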