[docs][cloud] Clarify Resilience of Replicate across regions topology (yugabyte#15512)

* update yb-ctl to yugabyted

* Clarify resilience of Replicate across regions

* minor edit

* Revert changes

* Revert
ddhodge authored Jan 6, 2023
1 parent 4934c0c commit 2b9ce04
Showing 2 changed files with 9 additions and 5 deletions.
docs/content/preview/explore/_index.md (10 changes: 7 additions & 3 deletions)
@@ -153,7 +153,7 @@ Start a local three-node cluster with a replication factor of `3` by first creat
./bin/yugabyted start \
--advertise_address=127.0.0.1 \
--base_dir=/tmp/ybd1 \
- --cloud_location=aws.us-east.us-east-1a
+ --cloud_location=aws.us-east-2.us-east-2a
```

On macOS and Linux, the additional nodes need loopback addresses configured:
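The alias commands themselves sit just above this hunk and aren't shown in the diff; a minimal sketch of what they typically look like on macOS (an assumption, not part of this commit):

```sh
# Add loopback aliases so the second and third nodes can bind to them (macOS).
sudo ifconfig lo0 alias 127.0.0.2
sudo ifconfig lo0 alias 127.0.0.3
```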
@@ -169,22 +169,26 @@ Next, join two more nodes with the previous node. By default, [yugabyted](../ref
./bin/yugabyted start \
--advertise_address=127.0.0.2 \
--base_dir=/tmp/ybd2 \
- --cloud_location=aws.us-east.us-east-2a \
+ --cloud_location=aws.us-east-2.us-east-2b \
--join=127.0.0.1
```

```sh
./bin/yugabyted start \
--advertise_address=127.0.0.3 \
--base_dir=/tmp/ybd3 \
- --cloud_location=aws.us-east.us-east-3a \
+ --cloud_location=aws.us-east-2.us-east-2c \
--join=127.0.0.1
```

After starting the yugabyted processes on all the nodes, configure the data placement constraint of the cluster as follows:

```sh
./bin/yugabyted configure --fault_tolerance=zone --base_dir=/tmp/ybd1
```

This command can be executed on any node where you already started YugabyteDB.
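To verify that all three nodes joined the cluster and the placement configuration took effect, one option (assuming yugabyted's `status` subcommand; this check is not part of the diff) is:

```sh
./bin/yugabyted status --base_dir=/tmp/ybd1
```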

To destroy the multi-node cluster, do the following:

```sh
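# The commands are truncated in this diff view; a plausible completion
# (assumption): destroy each node by pointing at its base directory.
./bin/yugabyted destroy --base_dir=/tmp/ybd1
./bin/yugabyted destroy --base_dir=/tmp/ybd2
./bin/yugabyted destroy --base_dir=/tmp/ybd3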
```
@@ -67,7 +67,7 @@ In a cluster that is replicated across regions, the nodes of the cluster are dep

![Single cluster deployed across three regions](/images/yb-cloud/Geo-Distribution-Blog-Post-Image-2.png)

-**Resilience**: Putting cluster nodes in different regions provides a higher degree of failure independence. In the event of a failure, the database cluster continues to serve data requests from the remaining regions while automatically replicating the data in the background to maintain the desired level of resilience.
+**Resilience**: Putting cluster nodes in different regions provides a higher degree of failure independence. In the event of a region failure, the database cluster continues to serve data requests from the remaining regions. YugabyteDB automatically fails over to the nodes in the two remaining regions, distributing the affected tablets evenly across them.

**Consistency**: All writes are synchronously replicated. Transactions are globally consistent.

@@ -220,7 +220,7 @@ For applications that have writes happening from a single zone or region but wan

![Read replicas](/images/yb-cloud/Geo-Distribution-Blog-Post-Image-6.png)

-**Resilience**: If you deploy the nodes of the primary cluster across zones, you get zone-level resilience. Read replicas don't participate in the Raft consistency protocol and therefore don't affect resilience.
+**Resilience**: If you deploy the nodes of the primary cluster across zones or regions, you get zone- or region-level resilience. Read replicas don't participate in the Raft consensus protocol and therefore don't affect resilience.

**Consistency**: The data in the replica clusters is timeline consistent, which is better than eventual consistency.

