From 414be92bd8c8c0404da0e245803aea15d4f85814 Mon Sep 17 00:00:00 2001
From: Stephen Ndegwa
Date: Fri, 7 Feb 2025 09:31:33 +0300
Subject: [PATCH] Add tutorial on fully duplicating a server using dd

---
 tutorials/duplicate-server-dd/01.en.md | 823 +++++++++++++++++++++++++
 1 file changed, 823 insertions(+)
 create mode 100644 tutorials/duplicate-server-dd/01.en.md

diff --git a/tutorials/duplicate-server-dd/01.en.md b/tutorials/duplicate-server-dd/01.en.md
new file mode 100644
index 000000000..83c52cddb
--- /dev/null
+++ b/tutorials/duplicate-server-dd/01.en.md
@@ -0,0 +1,823 @@
---
SPDX-License-Identifier: MIT
path: "/tutorials/duplicate-server-dd"
slug: "duplicate-server-dd"
date: "2025-02-06"
title: "How to Fully Duplicate a Server to Another Using dd"
short_description: "Learn how to duplicate a server using the dd command for raw data transfer between two servers in rescue mode."
tags: ["Backup", "Server Duplication", "dd", "Linux"]
author: "Stephen Ndegwa"
author_link: "https://github.com/stephenndegwa"
author_img: "https://avatars.githubusercontent.com/u/105418748"
author_description: "System administrator with expertise in Linux and high-availability RAID configurations."
language: "en"
available_languages: ["en"]
header_img: "header-raid"
cta: "product"
---

## **Introduction**

Duplicating a server is a critical step when migrating data, setting up redundancy, or recovering from a disaster. This tutorial provides a step-by-step guide on how to duplicate a Linux server using the `dd` command. The method performs raw data transfers between two servers running in rescue mode, producing a complete byte-for-byte copy of the source disks.

### **Prerequisites**

Before starting, ensure you have:

1. Two servers in **rescue mode**: one as the **source** and the other as the **destination**.
2. Access to the `dd` command and SSH on both servers.
3. Knowledge of the servers' disk partitioning and RAID configuration.
4. The ability to work in a `chroot` environment.

---


## **Scenario**

We are duplicating a server from the **source server** (`<203.0.113.1>`) to the **destination server** (`<203.0.113.2>`). Both servers must be in rescue mode, and SSH access must be configured between them.

---

## **Step 1: Preparing the Servers**

### **Step 1.1: Boot Both Servers into Rescue Mode**

1. Access your hosting provider's control panel for both servers.
2. Enable **rescue mode** for the source and destination servers.
3. Note the rescue mode credentials:
   - **Username**: `root`
   - **Password**: provided by the hosting provider.
   - **IP Addresses**:
     - **Source Server**: `<203.0.113.1>`
     - **Destination Server**: `<203.0.113.2>`

4. Log in to both servers via SSH:
   ```bash
   ssh root@<203.0.113.1>   # Source Server
   ssh root@<203.0.113.2>   # Destination Server
   ```

---

### **Step 1.2: Verify Disk Layout**

#### **On the Source Server (`<203.0.113.1>`)**

Run the following command:
```bash
lsblk
```

**Example Output**:
```plaintext
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
nvme0n1     259:0    0 476.9G  0 disk
├─nvme0n1p1 259:1    0    32G  0 part
│ └─md0       9:0    0    32G  0 raid1
├─nvme0n1p2 259:2    0     1G  0 part
│ └─md1       9:1    0  1022M  0 raid1
└─nvme0n1p3 259:3    0 443.9G  0 part
  └─md2       9:2    0 443.8G  0 raid1
```

The source server uses two NVMe disks in a RAID 1 setup; the second disk (`nvme1n1`) mirrors the layout shown above for `nvme0n1`.

#### **On the Destination Server (`<203.0.113.2>`)**

Run the same command:
```bash
lsblk
```

**Example Output**:
```plaintext
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0    0 953.9G  0 disk
nvme1n1 259:1    0 953.9G  0 disk
```

---

### **Step 1.3: Verify Disk Space**

1. The destination server's disks (953.9 GB) are larger than the source server's disks (476.9 GB), so there is sufficient space for the duplication.
2. Partition 3 on the destination server will later be extended to utilize the additional disk space.

---

### **Step 1.4: Verify SSH Connectivity**

1. From the **source server**, test SSH connectivity to the destination server:
   ```bash
   ssh root@<203.0.113.2>
   ```
   - Accept the SSH key if prompted.
   - Enter the rescue mode password for the destination server.

2. Exit the SSH session:
   ```bash
   exit
   ```

---

### **Step 1.5: Stop Existing RAID Arrays on the Destination Server**

If RAID arrays are already configured on the destination server, stop them to prepare for the new configuration:
```bash
mdadm --stop /dev/md0
mdadm --stop /dev/md1
mdadm --stop /dev/md2
```

Verify that all RAID arrays have been stopped:
```bash
cat /proc/mdstat
```

**Expected Output**:
```plaintext
Personalities :
unused devices: <none>
```

---

## **Step 2: Copying the Partition Table and Extending Partition 3**

### **Step 2.1: Copy the Partition Table**

#### **Export the Partition Table from the Source Server**

On the **source server** (`<203.0.113.1>`), export the partition table of both disks:
```bash
sfdisk -d /dev/nvme0n1 > nvme0n1_partition_table.txt
sfdisk -d /dev/nvme1n1 > nvme1n1_partition_table.txt
```

#### **Transfer the Partition Table to the Destination Server**

Copy the partition table files to the **destination server** (`<203.0.113.2>`) using `scp`:
```bash
scp nvme0n1_partition_table.txt root@<203.0.113.2>:/root/
scp nvme1n1_partition_table.txt root@<203.0.113.2>:/root/
```

#### **Apply the Partition Table on the Destination Server**

On the **destination server**, replicate the partition table onto the disks:
```bash
sfdisk /dev/nvme0n1 < nvme0n1_partition_table.txt
sfdisk /dev/nvme1n1 < nvme1n1_partition_table.txt
```

Verify the partition layout:
```bash
lsblk
```

**Expected Output**:
```plaintext
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1     259:0    0 953.9G  0 disk
├─nvme0n1p1 259:1    0    32G  0 part
├─nvme0n1p2 259:2    0     1G  0 part
└─nvme0n1p3 259:3    0 443.9G  0 part
nvme1n1     259:4    0 953.9G  0 disk
├─nvme1n1p1 259:5    0    32G  0 part
├─nvme1n1p2 259:6    0     1G  0 part
└─nvme1n1p3 259:7    0 443.9G  0 part
```

Partition 3 on both disks does not yet utilize the full available space; it will be extended in the next step.
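If you prefer to skip the intermediate dump files, the same partition-table copy can be done in one step by piping the `sfdisk` dump to the destination server over SSH. This is a minimal sketch run from the source server; it assumes the disks use the same device names (`/dev/nvme0n1`, `/dev/nvme1n1`) on both machines, so adjust the names if yours differ:

```bash
# Dump each source disk's partition table and apply it directly
# to the corresponding disk on the destination server.
sfdisk -d /dev/nvme0n1 | ssh root@<203.0.113.2> "sfdisk /dev/nvme0n1"
sfdisk -d /dev/nvme1n1 | ssh root@<203.0.113.2> "sfdisk /dev/nvme1n1"
```

Whichever variant you use, verify the result with `lsblk` on the destination server before extending partition 3.

---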
+ +--- + +### **Step 2.2: Extend Partition 3** + +#### **Resize Partition Table** + +Use `parted` to resize Partition 3 on both disks to utilize the full disk space: +```bash +parted /dev/nvme0n1 resizepart 3 100% +parted /dev/nvme1n1 resizepart 3 100% +``` + +Verify the new partition layout: +```bash +lsblk +``` + +**Expected Output**: +```plaintext +NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS +nvme0n1 259:0 0 953.9G 0 disk +├─nvme0n1p1 259:1 0 32G 0 part +├─nvme0n1p2 259:2 0 1G 0 part +└─nvme0n1p3 259:3 0 920.9G 0 part +nvme1n1 259:4 0 953.9G 0 disk +├─nvme1n1p1 259:5 0 32G 0 part +├─nvme1n1p2 259:6 0 1G 0 part +└─nvme1n1p3 259:7 0 920.9G 0 part +``` + +--- + +## **Step 3: Recreate RAID Arrays** + +In this step, we will recreate the RAID arrays on the destination server to match the source server’s configuration. + +--- + +### **Step 3.1: Stop Existing RAID Arrays** + +If any RAID arrays are preconfigured on **Server 2**, stop them to avoid conflicts: + +```bash +mdadm --stop /dev/md0 +mdadm --stop /dev/md1 +mdadm --stop /dev/md2 +``` + +Verify that all RAID arrays have been stopped: +```bash +cat /proc/mdstat +``` + +**Expected Output**: +```plaintext +Personalities : +unused devices: +``` + +--- + +### **Step 3.2: Recreate RAID Arrays** + +Recreate the RAID arrays for all partitions using the appropriate devices. + +1. **RAID for `/dev/md0` (32 GB)**: + ```bash + mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1 + ``` + +2. **RAID for `/dev/md1` (1 GB)**: + ```bash + mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2 + ``` + +3. **RAID for `/dev/md2` (920.9 GB)**: + ```bash + mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3 + ``` + +--- + +### **Step 3.3: Verify RAID Configuration** + +Check the status of the RAID arrays to ensure they are correctly created: +```bash +cat /proc/mdstat +``` + +**Expected Output**: +```plaintext +Personalities : [raid1] +md0 : active raid1 nvme1n1p1[1] nvme0n1p1[0] + 33520640 blocks super 1.2 [2/2] [UU] + +md1 : active raid1 nvme1n1p2[1] nvme0n1p2[0] + 1046528 blocks super 1.2 [2/2] [UU] + +md2 : active raid1 nvme1n1p3[1] nvme0n1p3[0] + 964689920 blocks super 1.2 [2/2] [UU] + +unused devices: +``` + +--- + +### **Step 3.4: Save RAID Configuration** + +To ensure the RAID configuration persists after reboot, follow these steps: + +1. **Scan and Save RAID Configuration**: + ```bash + mdadm --detail --scan >> /etc/mdadm/mdadm.conf + ``` + +2. **Verify the Configuration**: + Open `/etc/mdadm/mdadm.conf` to confirm that the RAID arrays are listed correctly: + ```bash + nano /etc/mdadm/mdadm.conf + ``` + + Example output: + ```plaintext + ARRAY /dev/md0 metadata=1.2 UUID=12345678:9abcdef0:12345678:9abcdef0 + ARRAY /dev/md1 metadata=1.2 UUID=87654321:0fedcba9:87654321:0fedcba9 + ARRAY /dev/md2 metadata=1.2 UUID=56789012:abcdef34:56789012:abcdef34 + ``` + +3. **Update Initramfs**: + Regenerate the initramfs to include the new RAID configuration: + ```bash + update-initramfs -u + ``` + +--- + + +## **Step 4: Transferring Data Using `dd`** + +This step covers transferring data from the **source server** (`<203.0.113.1>`) to the **destination server** (`<203.0.113.2>`) using the `dd` command. Each transfer runs inside a `screen` session to ensure uninterrupted execution. + +--- + +### **Step 4.1: Using `screen` for Transfers** + +#### **Why Use `screen`?** +- Ensures the transfer continues even if the SSH session is interrupted. 
- Allows you to reattach and monitor the transfer progress at any time.

---

### **Step 4.2: Transfer Commands for Each RAID Partition**

Run the transfer for each RAID array **from the source server**, using a dedicated `screen` session for each.

---

#### **Transfer `/dev/md0` (32 GB - Swap)**

1. Start a `screen` session for the transfer:
   ```bash
   screen -S transfer_md0
   ```

2. Run the `dd` command to transfer data:
   ```bash
   dd if=/dev/md0 bs=64K status=progress | ssh root@<203.0.113.2> "dd of=/dev/md0 bs=64K status=progress"
   ```
   Enter the destination server password if prompted.

3. Detach the session:
   - Press `Ctrl + A`, then `D`.

---

#### **Transfer `/dev/md1` (1 GB - Boot)**

1. Start a new `screen` session:
   ```bash
   screen -S transfer_md1
   ```

2. Run the `dd` command:
   ```bash
   dd if=/dev/md1 bs=64K status=progress | ssh root@<203.0.113.2> "dd of=/dev/md1 bs=64K status=progress"
   ```

3. Detach the session:
   - Press `Ctrl + A`, then `D`.

---

#### **Transfer `/dev/md2` (920.9 GB - Root)**

1. Start a new `screen` session:
   ```bash
   screen -S transfer_md2
   ```

2. Run the `dd` command:
   ```bash
   dd if=/dev/md2 bs=64K status=progress | ssh root@<203.0.113.2> "dd of=/dev/md2 bs=64K status=progress"
   ```
   Only the contents of the source array (about 443.8 GB) are transferred, even though `/dev/md2` on the destination server is larger.

3. Detach the session:
   - Press `Ctrl + A`, then `D`.

---

### **Step 4.3: Monitoring Transfers**

1. Check active `screen` sessions:
   ```bash
   screen -ls
   ```

   **Example Output**:
   ```plaintext
   There are screens on:
       12345.transfer_md0 (Detached)
       12346.transfer_md1 (Detached)
       12347.transfer_md2 (Detached)
   ```

2. Reattach to a session to monitor the transfer:
   ```bash
   screen -r transfer_md0
   ```

3. Detach the session after monitoring:
   - Press `Ctrl + A`, then `D`.

---

### **Step 4.4: Sample Progress Output**

As the transfer progresses, you will see real-time updates:
```plaintext
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 35 s, 123 MB/s
```

Once the transfer completes:
```plaintext
42949672960 bytes (43 GB, 40 GiB) copied, 400.12 s, 107 MB/s
```

---

### **Step 4.5: Verifying Data Integrity**

To ensure the data transfer was successful, compare the checksums of the RAID arrays between the source and destination servers. `/dev/md0` and `/dev/md1` should be the same size on both servers and can be hashed whole (compare `blockdev --getsize64` on both sides if in doubt). `/dev/md2` is larger on the destination server, so limit its checksum to the exact number of bytes that exist on the source array; otherwise the checksums will never match.

#### **Generate Checksums on the Source Server**

Run these commands on the source server:
```bash
blockdev --getsize64 /dev/md2   # note this value for the md2 check on the destination
dd if=/dev/md0 bs=64K | md5sum
dd if=/dev/md1 bs=64K | md5sum
dd if=/dev/md2 bs=64K | md5sum
```

#### **Generate Checksums on the Destination Server**

Run the corresponding commands on the destination server, replacing `<size_in_bytes>` with the value reported by `blockdev --getsize64` on the source server:
```bash
dd if=/dev/md0 bs=64K | md5sum
dd if=/dev/md1 bs=64K | md5sum
dd if=/dev/md2 bs=64K iflag=count_bytes count=<size_in_bytes> | md5sum
```

#### **Compare Checksums**

Ensure the checksums match between the source and destination servers. Matching checksums confirm the data was transferred accurately.

---



## **Step 5: Post-Transfer Configuration and Boot Testing**

This step ensures the destination server is properly configured and ready to boot independently. We will finalize configurations, update system files, and test the boot process. At this point, the data transfer from the source server is complete.

---

### **Step 5.1: Enter the `chroot` Environment**

Prepare the `chroot` environment to update system configurations.

1. Mount the necessary filesystems:
   ```bash
   mount /dev/md2 /mnt
   mount /dev/md1 /mnt/boot
   mount --bind /dev /mnt/dev
   mount --bind /proc /mnt/proc
   mount --bind /sys /mnt/sys
   ```

2. Enter the `chroot` environment:
   ```bash
   chroot /mnt
   ```

---

### **Step 5.2: Update IP Addresses**

Configure the network with the destination server's IP addresses.

1. **For Systems Using Netplan**:
   - Open the Netplan configuration file:
     ```bash
     nano /etc/netplan/01-netcfg.yaml
     ```

   - Update the IP address, gateway, and nameservers to reflect the new network environment:
     ```yaml
     ### Hetzner Online GmbH installimage
     network:
       version: 2
       renderer: networkd
       ethernets:
         enp0s31f6:
           addresses:
             - 203.0.113.2/32
             - 2001:db8:5678::3/64
           routes:
             - on-link: true
               to: 0.0.0.0/0
               via: 192.0.2.254
             - to: default
               via: fe80::1
           nameservers:
             addresses:
               - 8.8.8.8
               - 8.8.4.4
     ```

   - Apply the changes:
     ```bash
     netplan apply
     ```
     Inside the `chroot`, `netplan apply` may fail because no network daemon is running; in that case the configuration simply takes effect on the next boot.

2. **For Systems Using CentOS or RHEL**:
   - Open the network configuration file:
     ```bash
     nano /etc/sysconfig/network-scripts/ifcfg-eth0
     ```

   - Update the configuration:
     ```plaintext
     BOOTPROTO=static
     IPADDR=203.0.113.2
     NETMASK=255.255.255.0
     GATEWAY=192.0.2.254
     DNS1=8.8.8.8
     DNS2=8.8.4.4
     ```

   - Restart the network service:
     ```bash
     systemctl restart network
     ```
     This only works on a running system; inside the `chroot`, the change takes effect on the next boot.

---

### **Step 5.3: Update Hostname**

1. Set the new hostname:
   ```bash
   echo "new-hostname" > /etc/hostname
   ```

2. Update `/etc/hosts`:
   ```bash
   nano /etc/hosts
   ```

   Add or modify entries:
   ```plaintext
   127.0.0.1 localhost
   203.0.113.2 new-hostname
   ```

3. Save and exit.

---

### **Step 5.4: Use the Old Server UUIDs**

Do not generate new UUIDs. Because the filesystems were copied with `dd`, they keep the source server's UUIDs, and `/etc/fstab` (also copied from the source) already references them. Verify that the two still agree.

1. **Retrieve the Current UUIDs**:
   Run the `blkid` command:
   ```bash
   blkid
   ```

   **Sample Output**:
   ```plaintext
   /dev/md2: UUID="5d92a989-948e-434e-95c7-b41f1dd0a8a4" BLOCK_SIZE="4096" TYPE="ext4"
   /dev/md1: UUID="9d330244-03d6-4413-b970-6569f35b83e5" BLOCK_SIZE="4096" TYPE="ext3"
   /dev/md0: UUID="addc9bf1-8d05-44b3-a6bb-387300443655" TYPE="swap"
   ```

2. **Check `/etc/fstab`**:
   Open `/etc/fstab`:
   ```bash
   nano /etc/fstab
   ```

   Make sure the UUIDs match the values reported by `blkid`:
   ```plaintext
   proc /proc proc defaults 0 0
   # /dev/md/0
   UUID=addc9bf1-8d05-44b3-a6bb-387300443655 none swap sw 0 0
   # /dev/md/1
   UUID=9d330244-03d6-4413-b970-6569f35b83e5 /boot ext3 defaults 0 0
   # /dev/md/2
   UUID=5d92a989-948e-434e-95c7-b41f1dd0a8a4 / ext4 usrjquota=quota.user,jqfmt=vfsv1 0 0
   /usr/tmpDSK /tmp ext4 defaults,noauto 0 0
   ```

3. Save and exit.

---

### **Step 5.5: Rebuild GRUB and the Initramfs**

1. Install GRUB on both disks:
   ```bash
   grub-install /dev/nvme0n1
   grub-install /dev/nvme1n1
   ```

2. Regenerate the GRUB configuration:
   ```bash
   grub-mkconfig -o /boot/grub/grub.cfg
   ```

3. Refresh the RAID configuration and the initramfs so the newly created arrays are recognized at boot: replace the `ARRAY` lines in `/etc/mdadm/mdadm.conf` with the output of `mdadm --detail --scan`, then run `update-initramfs -u`.

---

### **Step 5.6: Exit and Unmount**

1. Exit the `chroot` environment:
   ```bash
   exit
   ```

2. Unmount all filesystems:
   ```bash
   umount /mnt/dev
   umount /mnt/proc
   umount /mnt/sys
   umount /mnt/boot
   umount /mnt
   ```

---


## **Step 6: Final Testing and Reboot**

In this step, we perform a series of validations to ensure the destination server is configured correctly and ready for production use. After verification, the server is rebooted, and post-reboot checks are conducted.

---

### **Step 6.1: Validate Configuration**

Before rebooting the destination server, verify the critical system configurations. The first three checks read files on the destination server's disks, so run them inside the `chroot` environment before Step 5.6 (or remount the filesystems and re-enter the `chroot` as described in Step 5.1).

1. **Check `/etc/fstab`:**
   Ensure the file contains the correct UUIDs and mount points:
   ```bash
   cat /etc/fstab
   ```

   **Expected Output Example:**
   ```plaintext
   proc /proc proc defaults 0 0
   # /dev/md/0
   UUID=addc9bf1-8d05-44b3-a6bb-387300443655 none swap sw 0 0
   # /dev/md/1
   UUID=9d330244-03d6-4413-b970-6569f35b83e5 /boot ext3 defaults 0 0
   # /dev/md/2
   UUID=5d92a989-948e-434e-95c7-b41f1dd0a8a4 / ext4 usrjquota=quota.user,jqfmt=vfsv1 0 0
   /usr/tmpDSK /tmp ext4 defaults,noauto 0 0
   ```

2. **Check GRUB Configuration:**
   Confirm that the bootloader was installed on both disks (Step 5.5) and that regenerating the GRUB configuration completes without errors:
   ```bash
   grub-mkconfig -o /boot/grub/grub.cfg
   ```

3. **Verify Network Configuration:**
   Confirm that the network settings are correct:
   - For Netplan:
     ```bash
     cat /etc/netplan/01-netcfg.yaml
     ```
   - For CentOS or RHEL:
     ```bash
     cat /etc/sysconfig/network-scripts/ifcfg-eth0
     ```

4. **Ensure RAID Status is Healthy:**
   Check the status of RAID arrays:
   ```bash
   cat /proc/mdstat
   ```

   **Expected Output Example:**
   ```plaintext
   Personalities : [raid1]
   md2 : active raid1 nvme0n1p3[0] nvme1n1p3[1]
         964689920 blocks super 1.2 [2/2] [UU]

   md1 : active raid1 nvme0n1p2[0] nvme1n1p2[1]
         1046528 blocks super 1.2 [2/2] [UU]

   md0 : active raid1 nvme0n1p1[0] nvme1n1p1[1]
         33520640 blocks super 1.2 [2/2] [UU]
   ```

---

### **Step 6.2: Reboot the Server**

1. **Initiate Reboot:**
   Reboot the server to apply all changes:
   ```bash
   reboot
   ```

2. **Monitor the Boot Process:**
   If available, use the server provider's management console to monitor the boot process for errors.

---

### **Step 6.3: Post-Reboot Validation**

After the server reboots, log in and validate its functionality.

1. **Verify Mounted Filesystems:**
   Ensure all partitions are mounted as defined in `/etc/fstab`:
   ```bash
   df -h
   ```

   **Expected Output Example:**
   ```plaintext
   Filesystem      Size  Used Avail Use% Mounted on
   /dev/md2        920G   10G  910G   1% /
   /dev/md1        977M  100M  877M  11% /boot
   ```

   Swap does not appear in `df` output; check it with `swapon --show` or `free -h`. If `/` still shows the source filesystem's original size rather than the full capacity of the larger array, grow it with `resize2fs /dev/md2` (ext4 supports growing while mounted), then check again.

2. **Check RAID Status:**
   Confirm the RAID arrays are active and healthy:
   ```bash
   cat /proc/mdstat
   ```

3. **Verify Network Connectivity:**
   Ensure the server has network access and can reach external resources:
   ```bash
   ping -c 4 8.8.8.8
   ping -c 4 google.com
   ```

4. **Validate Hostname and IP Address:**
   - Check the hostname:
     ```bash
     hostname
     ```
   - Confirm the IP address:
     ```bash
     ip a
     ```

5. **Start Essential Services:**
   If applicable, start and verify services (replace `<service_name>` with the actual unit name, for example `nginx`):
   ```bash
   systemctl start <service_name>
   systemctl status <service_name>
   ```

---

### **Step 6.4: Verify Data Integrity**

Ensure the transferred data matches the source server by comparing checksums of a few important files.

1. **Generate Checksums on the Source Server:**
   ```bash
   md5sum /path/to/important/file
   ```

2. **Generate Checksums on the Destination Server:**
   ```bash
   md5sum /path/to/important/file
   ```

3. **Compare Results:**
   The checksums must match to confirm data integrity.


---

### **Conclusion**

This guide detailed the process of duplicating and configuring a server using `dd`, ensuring a seamless migration of data and configurations from a source server to a destination server.

---

### License: MIT

\ No newline at end of file