Fault tolerance of PostgreSQL TimescaleDB cluster
By default, a PostgreSQL TimescaleDB cloud database cluster consists of a single master node. To make the cluster fault-tolerant, add replicas to it. The placement of nodes depends on whether the cluster has replicas and on the number of segments in the pool where the cluster is located.
Master node
By default, the cluster consists of one main node, the master node. When you are connected to the master node, all operations are available: reads (SELECT) and writes (INSERT, UPDATE, DELETE, and others).
All data changes on the master node are duplicated to the replicas. The replication process does not affect the operation of the replicas or the master node.
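To confirm that changes on the master node are being streamed to the replicas, you can query the standard PostgreSQL pg_stat_replication view on the master node. Below is a minimal sketch using Python and psycopg2; the host, database name, and credentials are placeholders, not values provided by the service.

```python
# A minimal sketch: list the replicas streaming from the master node.
# Host, database name, user, and password are placeholders -- substitute
# your cluster's actual connection details.
import psycopg2

conn = psycopg2.connect(
    host="master.example.com",   # address of the master node (placeholder)
    dbname="mydb",
    user="myuser",
    password="mypassword",
)

with conn, conn.cursor() as cur:
    # pg_stat_replication is populated only on the master node;
    # sync_state shows which replica is synchronous ("sync") and
    # which replicas are asynchronous ("async").
    cur.execute(
        "SELECT application_name, client_addr, state, sync_state "
        "FROM pg_stat_replication;"
    )
    for name, addr, state, sync_state in cur.fetchall():
        print(name, addr, state, sync_state)

conn.close()
```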
Replicas
Replicas are full copies of the master node and are available only for reading data (SELECT). In PostgreSQL TimescaleDB cloud database clusters, one replica is always synchronous; any additional replicas are asynchronous.
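To check whether the node you are connected to is the master node or a replica, you can call the standard PostgreSQL function pg_is_in_recovery(): it returns true on a replica and false on the master node. A minimal sketch, again with placeholder connection details:

```python
# A minimal sketch: detect whether the current connection points at the
# master node or at a read-only replica. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="replica.example.com",  # address of any cluster node (placeholder)
    dbname="mydb",
    user="myuser",
    password="mypassword",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT pg_is_in_recovery();")
    (is_replica,) = cur.fetchone()
    if is_replica:
        print("Connected to a replica: only SELECT queries will succeed.")
    else:
        print("Connected to the master node: reads and writes are available.")

conn.close()
```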
Replicas make the cluster fault-tolerant: if the master node stops working, one of the replicas takes over its role and the cluster continues to operate normally. When the failed master node is restored, it rejoins the cluster as a replica. The address of the master node changes after a failover.
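Because the address of the master node changes after a failover, an application can list the addresses of all cluster nodes and let the client library pick whichever node currently accepts writes. The sketch below relies on the libpq multi-host syntax and the target_session_attrs parameter (available in PostgreSQL 10 and later); the host names, database name, and credentials are placeholders.

```python
# A minimal sketch: let the client library find the current master node
# after a failover. Requires a client built on libpq 10+; all connection
# details below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="node1.example.com,node2.example.com,node3.example.com",
    dbname="mydb",
    user="myuser",
    password="mypassword",
    # "read-write" makes the driver try each host in turn and keep the
    # first connection that accepts writes, i.e. the current master node.
    target_session_attrs="read-write",
)
print("Connected to the current master node")
conn.close()
```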
For a cluster with replicas, the following SLA applies: we guarantee 99.95% availability for writes and 99.99% availability for reads.
If there are no replicas in the cluster, the cluster will be unavailable until the master node is restored. No data will be lost in this case.
We recommend creating fault-tolerant clusters with replicas or adding replicas to existing clusters.
Placement of nodes
The type of node placement in a cloud database cluster depends on whether the cluster has replicas and on the number of segments in the pool where the cluster is located:
- Single-AZ: all nodes are placed in one segment of the pool. Applies to clusters without replicas and to clusters with replicas located in pools with a single segment;
- Multi-AZ: nodes are placed in different segments of the pool. Applies to clusters with replicas located in pools with multiple segments. Nodes are allocated to segments sequentially.
For example, if you have created a four-node cluster (one master node and three replicas) in pool ru-1, the first three nodes will be placed sequentially in pool segments ru-1a, ru-1b and ru-1c. The fourth node will be placed in the ru-1a segment. If you add a fifth node, it will be placed in the ru-1b pool segment.
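The sequential allocation can be illustrated with a short sketch; the segment names follow the ru-1 example above, and the loop only models the round-robin rule described here, not the platform's actual scheduler.

```python
# An illustration of the sequential (round-robin) placement rule described
# above; this is not the platform's actual scheduler.
segments = ["ru-1a", "ru-1b", "ru-1c"]  # pool segments from the ru-1 example

node_count = 5  # one master node and four replicas
for node_index in range(node_count):
    segment = segments[node_index % len(segments)]
    print(f"node {node_index + 1} -> segment {segment}")
# node 1 -> ru-1a, node 2 -> ru-1b, node 3 -> ru-1c,
# node 4 -> ru-1a, node 5 -> ru-1b
```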
To see how many segments a pool contains, refer to the instructions in Countries, Regions, Availability Zones, and Pools.
Change the number of replicas
You can increase or decrease the number of replicas. The cluster continues to operate while the number of replicas is being changed.
- In the dashboard, on the top menu, click Products and select Cloud Databases.
- Open the Active tab.
- Open the cluster page → Settings tab.
- Click Scale Cluster.
- Specify the new number of replicas. If there are no free addresses in the subnet to which the cluster is connected, a replica cannot be added: each new replica occupies an address in the subnet.
- Click Save.