diff --git a/docs/guides/applications/big-data/manually-deploy-kafka-cluster/index.md b/docs/guides/applications/big-data/manually-deploy-kafka-cluster/index.md index b2a2f3c9848..48c5ac4d981 100644 --- a/docs/guides/applications/big-data/manually-deploy-kafka-cluster/index.md +++ b/docs/guides/applications/big-data/manually-deploy-kafka-cluster/index.md @@ -2,8 +2,8 @@ slug: manually-deploy-kafka-cluster title: "Manually Deploy an Apache Kafka Cluster on Akamai" description: "Learn how to deploy and test a secure Apache Kafka cluster on Akamai using provided, customizable Ansible playbooks." -authors: ["Akamai"] -contributors: ["Akamai"] +authors: ["John Dutton","Elvis Segura"] +contributors: ["John Dutton","Elvis Segura"] published: 2024-11-20 keywords: ['apache kafka','kafka','data stream','stream management'] license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' diff --git a/docs/guides/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x/index.md b/docs/guides/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x/index.md index ced9f1405e6..242f9dd0ad8 100644 --- a/docs/guides/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x/index.md +++ b/docs/guides/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x/index.md @@ -6,7 +6,7 @@ description: "Use the Open-source PgAdmin Program to Securely Manage Remote Post authors: ["Linode"] contributors: ["Linode"] published: 2010-04-30 -modified: 2018-11-29 +modified: 2024-11-21 keywords: ["pgadmin", "mac os x", "postgresql gui", "manage postgresql databases", "ssh tunnel"] license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' aliases: ['/databases/postgresql/pgadmin-macos-x/','/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x/'] @@ -18,62 +18,76 @@ tags: ["database","postgresql"] ![Securely Manage Remote 
PostgreSQL Servers with pgAdmin on Mac OS X](Securely_Manage_Remote_PostgreSQL_Servers_with_pgAdmin_on_Mac_OS_X_smg.jpg) -pgAdmin is a free, open-source PostgreSQL database administration GUI for Microsoft Windows, Apple Mac OS X and Linux systems. It offers excellent capabilities with regard to database server information retrieval, development, testing, and ongoing maintenance. This guide will help you get up and running with pgAdmin on Mac OS X, providing secure access to remote PostgreSQL databases. It is assumed that you have already installed PostgreSQL on your Linode in accordance with our [PostgreSQL installation guides](/docs/databases/postgresql/). +pgAdmin is a free, open-source PostgreSQL database administration GUI for Microsoft Windows, Apple Mac OS X, and Linux systems. It offers capabilities for database server information retrieval, development, testing, and ongoing maintenance. This guide provides steps to get you up and running with pgAdmin on Mac OS X with secure access to remote PostgreSQL databases. -## Install pgAdmin +## Before You Begin -1. Visit the [pgAdmin download page](https://www.pgadmin.org/download/pgadmin-4-macos/) to obtain the most recent version of the program. Save the installer to your desktop and launch it. Read the license agreement and click the "Agree" button to continue. +1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/products/platform/get-started/) and [Creating a Compute Instance](/docs/products/compute/compute-instances/guides/create/) guides. - ![pgAdmin on Mac OS X installer license agreement dialog](pg-admin-tos.png) +1. Follow our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access. -2.
After the program has uncompressed itself, you'll see a pgAdmin icon in a Finder window. You may drag this to your Applications folder or your dock. +1. Install PostgreSQL on your Linode using one of our [PostgreSQL installation guides](/docs/databases/postgresql/). -1. After starting pgAdmin, open a new pgAdmin window by selecting the pgAdmin logo in the menu bar and selecting "New pgAdmin 4 window..." +{{< note >}} +This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Linux Users and Groups](/docs/guides/linux-users-and-groups/) guide. +{{< /note >}} - ![pgAdmin on Mac OS X menu bar icon menu](pg-admin-open-new-window.png) +## Install pgAdmin - A new window will be displayed in your web browser with the pgAdmin interface. +1. Visit the [pgAdmin download page](https://www.pgadmin.org/download/pgadmin-4-macos/) to obtain the most recent version. Save the installer to your desktop and launch it. Read the license agreement and click "Agree" to continue. -## Configure SSH Tunnel + ![pgAdmin on Mac OS X installer license agreement dialog](pg-admin-tos.png) -While PostgreSQL supports SSL connections, it is not advisable to instruct it to listen on public IP addresses unless absolutely necessary. For this reason, you'll be using the following command to create an SSH tunnel to your database server, replacing `username` with your Linux username and `remote-host` with your Linode's hostname or IP address: +1. After the program is installed, you'll see a pgAdmin icon in a Finder window. You may drag this to your Applications folder or your dock. - ssh -f -L 5433:127.0.0.1:5432 username@remote-host -N +1. Start the pgAdmin interface. A welcome page should be displayed: -Although PostgreSQL uses port 5432 for TCP connections, we're using the local port 5433 in case you decide to install PostgreSQL locally later on.
+ ![pgAdmin on Mac OS X menu bar icon menu](pg-admin-open-welcome-window.png) ## Use pgAdmin -1. Launch pgAdmin and you'll be presented with a default view containing no servers. Right click "Servers" and then navigate to "Create > Server". - - ![pgAdmin III default view on Mac OS X](pg-admin-new-server.png) - -2. If you're having problems connecting, you may need to check PostgreSQL's configuration to ensure it accepts connections. Modify the following lines in `/etc/postgresql/9.5/main/postgresql.conf` if necessary: - - {{< file "/etc/postgresql/9.5/main/postgresql.conf" aconf >}} -listen_addresses = 'localhost' - -port = 5432 - -{{< /file >}} +1. Open **pgAdmin 4**. +2. In the **Quick Links** section, click **Add New Server**. +3. Under the **General** tab, enter a name for your server connection. For example: `Linode PostgreSQL` +4. Navigate to the **Connection** tab: + - **Hostname/address**: `localhost`. + The SSH tunnel redirects this to the Linode server. + - **Port**: The PostgreSQL port on your Linode, typically `5432`. + - **Maintenance Database**: `postgres` or your database name. + - **Username**: Your PostgreSQL username. For example: `postgres` + - **Password**: The password for your PostgreSQL user. +5. Navigate to the **SSH Tunnel** tab: + - **Use SSH tunneling**: Enable this option. + - **Tunnel host**: Your Linode's IP address. + - **Tunnel port**: `22`. This is the default SSH port. + - **Username**: Your SSH username for the Linode instance. + - **Authentication**: Choose `Identity file` if you are using an SSH key, or `Password` for password-based authentication. + - **Identity file**: If you are using an SSH key, provide the location of the private key file. + - **Password**: If you are using password-based authentication, enter your SSH password. +6. Click **Save** to create the server connection. +### Verify Connection - Restart PostgreSQL to activate these changes. This command may vary among different distributions: +1.
After saving the configuration, right-click your new server in **pgAdmin** and select **Connect**. +2. If the connection is successful, you should see your databases listed in the **Servers** panel. - sudo systemctl restart postgresql +### Troubleshooting -3. In the "Create-Server" dialog that appears, enter a name for your server. +- **SSH Access Issues**: Ensure your Linode firewall allows port `22`. - ![Supply a local name for your server.](pg-admin-server-name.png) +- **PostgreSQL Bind Address**: -4. In the "Connections" tab enter "localhost" for the "Host name/address" field, as you'll be connecting via your SSH tunnel, and set the port to 5433. In the username and password fields, enter the credentials you specified when setting up PostgreSQL. + 1. Check the PostgreSQL `postgresql.conf` file to confirm it's listening on `127.0.0.1` or `localhost`. Update `listen_addresses` if necessary: - For greater security, uncheck the "Save password" box. Click "Save" to connect to your server. + ```file + listen_addresses = 'localhost' + ``` - ![pgAdmin new server connection settings on Mac OS X](pg-admin-server-connection-settings.png) + 2. Restart PostgreSQL after making changes: -5. You will be presented with a full view of the databases that your user account has access to: + ```command + sudo systemctl restart postgresql + ``` - ![pgAdmin full database view on Mac OS X](pg-admin-database-view.png) +- **Firewall**: Ensure PostgreSQL's port (`5432`) is open for local connections. -Congratulations! You've securely connected to your remote PostgreSQL server with pgAdmin 4. 
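+- **Manual Tunnel Test**: To isolate problems, it can help to test SSH port forwarding outside of pgAdmin. An earlier version of this guide used a manual tunnel; a similar sketch is shown below. Replace `username` and `remote-host` with your own values; local port `5433` is an arbitrary choice that avoids clashing with a local PostgreSQL installation:

    ```command
    ssh -f -N -L 5433:127.0.0.1:5432 username@remote-host
    ```

    If a local client can then reach the database through port `5433` (for example with `psql -h 127.0.0.1 -p 5433`, assuming `psql` is installed locally), SSH forwarding works and any remaining issue lies in the pgAdmin connection settings.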
diff --git a/docs/guides/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x/pg-admin-open-welcome-window.png b/docs/guides/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x/pg-admin-open-welcome-window.png new file mode 100644 index 00000000000..7c52794a1ce Binary files /dev/null and b/docs/guides/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x/pg-admin-open-welcome-window.png differ diff --git a/docs/guides/kubernetes/install-the-linode-ccm-on-unmanaged-kubernetes/index.md b/docs/guides/kubernetes/install-the-linode-ccm-on-unmanaged-kubernetes/index.md index 79dc8b4997d..aa9a5aa1138 100644 --- a/docs/guides/kubernetes/install-the-linode-ccm-on-unmanaged-kubernetes/index.md +++ b/docs/guides/kubernetes/install-the-linode-ccm-on-unmanaged-kubernetes/index.md @@ -6,6 +6,7 @@ og_description: "This guide includes steps for installing the Linode Cloud Contr authors: ["Linode"] contributors: ["Linode"] published: 2020-07-16 +modified: 2024-12-05 keywords: ['kubernetes','cloud controller manager','load balancing','nodebalancers'] tags: ["docker","networking","kubernetes"] license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' @@ -18,21 +19,27 @@ The [Linode Cloud Controller Manager (CCM)](https://github.com/linode/linode-clo NodeBalancers provide your Kubernetes cluster with a reliable way of exposing resources to the public internet. The Linode CCM handles the creation and deletion of the NodeBalancer, and, along with other Master Plane components, correctly identifies the resources, and their networking, that the NodeBalancer will route traffic to. Whenever a Kubernetes Service of the `LoadBalancer` type is created, your Kubernetes cluster will create a Linode NodeBalancer service with the help of the Linode CCM. {{< note >}} -This guide will show you how to manually install the Linode CCM on an unmanaged Kubernetes cluster. 
This guide exists to support special use cases. For example, if you would like to experiment with various elements of a Kubernetes control plane. +This guide shows you how to manually install the Linode CCM on an **unmanaged** Kubernetes cluster. It exists to support special use cases, such as experimenting with various elements of a Kubernetes control plane. If you would like to use Kubernetes for production scenarios and make use of Linode NodeBalancers to expose your cluster's resources, it is recommended that you [use the Linode Kubernetes Engine to deploy your cluster](/docs/products/compute/kubernetes/). An LKE cluster's control plane has the Linode CCM preinstalled and does not require any of the steps included in this guide. -Similarly, if you would like to deploy an unmanaged Kubernetes cluster on Linode, the best way to accomplish that is using [Terraform and the Linode K8s module](/docs/guides/how-to-provision-an-unmanaged-kubernetes-cluster-using-terraform/). The Linode K8s module will also include the Linode CCM preinstalled on the Kubernetes master's control plane and does not require any of the steps included in this guide. +Another option for deploying Kubernetes clusters on Linode is to use [Cluster API Provider Linode (CAPL)](https://linode.github.io/cluster-api-provider-linode/). It provisions a management Kubernetes cluster that can then be used to provision and manage multiple other child Kubernetes clusters on Linode. CAPL installs the CCM by default and supports provisioning Kubernetes clusters using kubeadm, rke2, and k3s. -If you have used the Linode Kubernetes Engine (LKE) or the Linode Terraform K8s module to deploy your cluster, you should instead refer to the [Getting Started with Load Balancing on a Linode Kubernetes Engine (LKE) Cluster](/docs/products/compute/kubernetes/guides/load-balancing/) guide for steps on adding and configuring NodeBalancers on your Kubernetes cluster.
+If you have used the Linode Kubernetes Engine (LKE) or Cluster API Provider Linode (CAPL) to deploy your cluster, you should refer to the [Getting Started with Load Balancing on a Linode Kubernetes Engine (LKE) Cluster](/docs/products/compute/kubernetes/guides/load-balancing/) guide for steps on adding and configuring NodeBalancers on your Kubernetes cluster. {{< /note >}} ## In this Guide -You will manually install the Linode CCM on your unmanaged Kubernetes cluster. This will include: +Instructions are shown for manually installing the Linode CCM on your unmanaged Kubernetes cluster. This includes: - [Updating your Kubernetes cluster's configuration](#update-your-cluster-configuration) to use the CCM for Node scheduling. -- [Using a helper script to create a manifest file](#install-the-linode-ccm) that will install the Linode CCM and supporting resources on your cluster. + +- Two options for installing the Linode CCM: + + - [Using a Helm chart](#install-linode-ccm-using-helm) + + - [Using a helper script to create a manifest file](#install-linode-ccm-using-generated-manifest) + - [Updating the Linode CCM](#updating-the-linode-ccm) running on your cluster with its latest upstream changes. ### Before You Begin @@ -45,9 +52,7 @@ You will manually install the Linode CCM on your unmanaged Kubernetes cluster. T 1. Ensure you have [kubectl installed](/docs/guides/how-to-provision-an-unmanaged-kubernetes-cluster-using-terraform/#install-kubectl) on your local computer and you can access your Kubernetes cluster with it. -1. [Install Git](/docs/guides/how-to-install-git-on-linux-mac-and-windows/) on your local computer. - -1. Generate a [Linode APIv4 token](/docs/products/tools/api/get-started/#get-an-access-token). +1. Generate a [Linode APIv4 token](/docs/products/tools/api/get-started/#get-an-access-token). This is required for both methods of installing the Linode CCM in this guide. 
## Running the Linode Cloud Controller Manager @@ -60,13 +65,34 @@ In order to run the Linode Cloud Controller Manager: These configurations will change the behavior of your cluster and how it interacts with its Nodes. For more details, visit the [upstream Cloud Controller documentation](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/). -### Install the Linode CCM +### Install Linode CCM using Helm -The Linode CCM's GitHub repository provides a helper script that creates a Kubernetes manifest file that you can use to install the CCM on your cluster. These steps should be run on your local computer and were tested on a macOS. +Installing the Linode CCM using Helm is the preferred method. The Helm chart contents are available in the [deploy/chart directory of the linode-cloud-controller-manager GitHub repository](https://github.com/linode/linode-cloud-controller-manager/tree/main/deploy/chart). -{{< note >}} -You will need your [Linode APIv4](/docs/products/tools/api/get-started/#get-an-access-token) token to complete the steps in this section. -{{< /note >}} +1. [Install Helm](https://helm.sh/docs/intro/install/). + +1. Add the `ccm-linode` Helm repository. + + ```command + helm repo add ccm-linode https://linode.github.io/linode-cloud-controller-manager/ + helm repo update ccm-linode + ``` + +1. Deploy the `ccm-linode` Helm chart. + + ```command + export LINODE_API_TOKEN={{< placeholder "YOUR_LINODE_API_TOKEN" >}} + export REGION={{< placeholder "YOUR_LINODE_REGION" >}} + helm install ccm-linode --set apiToken=$LINODE_API_TOKEN,region=$REGION ccm-linode/ccm-linode + ``` + +For advanced configuration, you can specify your own [values.yaml](https://github.com/linode/linode-cloud-controller-manager/blob/main/deploy/chart/values.yaml) file when installing the Helm chart.
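+For example, a minimal custom values file setting only the two fields used above might look like the following sketch. The `apiToken` and `region` keys mirror the `--set` flags shown earlier; consult the linked `values.yaml` for the full list of supported settings:
+
+    ```file {title="values.yaml"}
+    # Linode APIv4 token and target region for the CCM.
+    apiToken: {{< placeholder "YOUR_LINODE_API_TOKEN" >}}
+    region: {{< placeholder "YOUR_LINODE_REGION" >}}
+    ```
+
+Pass the file to Helm with the standard `-f` flag: `helm install ccm-linode -f values.yaml ccm-linode/ccm-linode`.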
+ +### Install Linode CCM using Generated Manifest + +The Linode CCM's GitHub repository provides a helper script that creates a Kubernetes manifest file that you can use to install the CCM on your cluster. These steps should be run on your local computer and were tested on a macOS workstation. + +1. [Install Git](/docs/guides/how-to-install-git-on-linux-mac-and-windows/) on your local computer. 1. Clone the [Linode CCM's GitHub repository](https://github.com/linode/linode-cloud-controller-manager). @@ -98,6 +124,15 @@ You will need your [Linode APIv4](/docs/products/tools/api/get-started/#get-an-a You can create your own `ccm-linode.yaml` manifest file by editing the contents of the `ccm-linode-template.yaml` file and changing the values of the `data.apiToken` and `data.region` fields with your own desired values. This template file is located in the `deploy` directory of the Linode CCM repository. {{< /note >}} + {{< note >}} + Helm can also be used to render the ccm-linode Helm chart and apply it manually. + {{< /note >}} + + ```command + cd linode-cloud-controller-manager/ + helm template --set apiToken=$LINODE_API_TOKEN,region=$REGION deploy/chart/ + ``` + ## Updating the Linode CCM The easiest way to update the Linode CCM is to edit the DaemonSet that creates the Linode CCM Pod. To do so: @@ -111,13 +146,13 @@ The easiest way to update the Linode CCM is to edit the DaemonSet that creates t 1. The CCM Daemonset manifest will appear in vim. Press `i` to enter insert mode. Navigate to `spec.template.spec.image` and change the field's value to the desired version tag. 
For instance, if you had the following image: ```file - image: linode/linode-cloud-controller-manager:v0.2.2 + image: linode/linode-cloud-controller-manager:v0.4.12 ``` - You could update the image to `v0.2.3` by changing the image tag: + You could update the image to `v0.4.20` by changing the image tag: ```file - image: linode/linode-cloud-controller-manager:v0.2.3 + image: linode/linode-cloud-controller-manager:v0.4.20 ``` For a complete list of CCM version tags, visit the [CCM DockerHub page](https://hub.docker.com/r/linode/linode-cloud-controller-manager/tags). diff --git a/docs/guides/quick-answers/linux/linux-mount-smb-share/index.md b/docs/guides/quick-answers/linux/linux-mount-smb-share/index.md index ba3bed00cc6..48c13b2b927 100644 --- a/docs/guides/quick-answers/linux/linux-mount-smb-share/index.md +++ b/docs/guides/quick-answers/linux/linux-mount-smb-share/index.md @@ -161,7 +161,7 @@ You don’t want to have to type in your credentials every time you access a sha 1. Set ownership of the credentials file to the current user by running the following command: ```command - sudo chown : + sudo chown ``` Replace `` with your username and `` with the name of your credentials file. @@ -223,4 +223,4 @@ The share should not appear in the output of this command. ## Conclusion -You now have an understanding of SMB (and CIFS), what an SMB share is, and what a mount point is. These pieces of information allow you to share remote data in a way that’s transparent to users. From the user's perspective, the resource is local to the server that they’re accessing. This guide also shows you how to use the mount and umount commands in a basic way to create and delete shares, how to create and use a credentials file to automate the sharing process to some extent, and how to automatically remount the share after a reboot. \ No newline at end of file +You now have an understanding of SMB (and CIFS), what an SMB share is, and what a mount point is. 
These pieces of information allow you to share remote data in a way that’s transparent to users. From the user's perspective, the resource is local to the server that they’re accessing. This guide also shows you how to use the mount and umount commands in a basic way to create and delete shares, how to create and use a credentials file to automate the sharing process to some extent, and how to automatically remount the share after a reboot. diff --git a/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend1.png b/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend1.png new file mode 100644 index 00000000000..a3cdbd879ea Binary files /dev/null and b/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend1.png differ diff --git a/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend2.png b/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend2.png new file mode 100644 index 00000000000..de889368a4d Binary files /dev/null and b/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend2.png differ diff --git a/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend3.png b/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend3.png new file mode 100644 index 00000000000..490555037a1 Binary files /dev/null and 
b/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/2024-Default-WordPress-Homepage-backend3.png differ diff --git a/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/index.md b/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/index.md new file mode 100644 index 00000000000..bfb4a572081 --- /dev/null +++ b/docs/guides/uptime/loadbalancing/getting-started-with-haproxy-tcp-load-balancing-and-health-checks/index.md @@ -0,0 +1,337 @@ +--- +slug: getting-started-with-haproxy-tcp-load-balancing-and-health-checks +title: "Getting Started with HAProxy TCP Load Balancing and Health Checks" +description: "Learn how to install and configure HAProxy for load balancing and health checks on Ubuntu, CentOS Stream, and openSUSE Leap." +authors: ["Tom Henderson"] +contributors: ["Tom Henderson"] +published: 2024-08-21 +keywords: ['haproxy','haproxy load balancing','haproxy setup tutorial','haproxy active health checks','haproxy passive health checks','install haproxy on ubuntu','install haproxy on centos','install haproxy on opensuse','haproxy frontend configuration','haproxy backend configuration','haproxy health check configuration'] +license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' +external_resources: + - '[HAProxy Official Documentation](https://www.haproxy.com/documentation/)' + +--- + +[HAProxy](https://www.haproxy.org/) serves as a reverse proxy between frontend client requests and backend server resources, and can be configured at Layer 4 (network) or Layer 7 (application). A common use of HAProxy is as an intelligent network load balancer. In this role, HAProxy routes incoming frontend traffic to designated backend instances. 
By default, no load balancing is applied; however, HAProxy can be configured to use various load balancing methods and health-related features, including: + +- **Round Robin**: Distributes incoming connections evenly across all available backend servers by sequentially assigning each new connection to the next server in the pool. +- **Least Connections**: Directs incoming connections to the backend server with the fewest active connections, helping to balance the load more evenly based on current server utilization. +- **Health Checks**: Continuously monitors the health of backend servers. Servers that fail health checks are automatically removed from the pool until they recover, ensuring that only healthy servers receive traffic. + +This guide demonstrates how to install HAProxy onto three Linux distributions: Ubuntu, CentOS Stream, and openSUSE Leap. It also uses an example WordPress deployment with sample configurations to implement and test HAProxy's TCP load balancing and health check features. + +## Before You Begin + +1. To be used as your HAProxy instance, deploy a Compute Instance running one of the `Ubuntu 24.04 LTS`, `CentOS Stream 9`, or `openSUSE Leap 15.6` distributions, and assign the instance to a VLAN. See our [Getting Started with Linode](/docs/products/platform/get-started/) and [Creating a Compute Instance](/docs/products/compute/compute-instances/guides/create/) guides. + + HAProxy can be deployed using a [Nanode](https://www.linode.com/pricing/) plan for testing purposes. See HAProxy's [hardware recommendations](https://www.haproxy.com/documentation/haproxy-enterprise/getting-started/installation/linux/#hardware-recommendations) for production-level workloads. + +1. Follow our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access. + +1.
This guide uses WordPress backend instances to demonstrate how HAProxy controls network traffic flows at both the TCP/Network (Layer 4) and HTTP/Application (Layer 7) levels. Follow the steps in our [Deploy WordPress through the Linode Marketplace](/docs/marketplace-docs/guides/wordpress/) guide to create three backend WordPress test instances. Fill out all required fields under **WordPress Setup**, and use default values along with the following options: + + - **The stack you are looking to deploy Wordpress on**: Choose either **LAMP** or **LEMP**. + - **Website title**: For each instance, enter `backend1`, `backend2`, and `backend3`, respectively. + - **Region**: Select the same location the HAProxy instance is in. + - **Linode Plan**: A **Shared CPU**, **Nanode 1 GB** is sufficient to test and demonstrate HAProxy options. + - **Linode Label**: Label each instance to correspond with the website titles `backend1`, `backend2`, and `backend3`, respectively. + - **VLAN**: Attach the instances to the same VLAN as the HAProxy instance. + + Each server is generated with an `index.html` home page that indicates the given title of the website hosted on the instance (`backend1`, `backend2`, or `backend3`). Open a web browser and navigate to each server's IP address to verify that the example test servers are functioning. Take note of the IP addresses of each backend instance, as they are used later. + +{{< note >}} +This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Users and Groups](/docs/guides/linux-users-and-groups/) guide. +{{< /note >}} + +## Install HAProxy + +To install HAProxy, log into the HAProxy instance as your limited sudo user, and complete the steps below. + +1. 
Select your distribution and use the corresponding command to install HAProxy: + + {{< tabs >}} + {{< tab "Ubuntu 24.04 LTS" >}} + Use `apt` to install HAProxy on an Ubuntu 24.04 LTS instance: + + ```command + sudo apt install haproxy + ``` + {{< /tab >}} + {{< tab "CentOS Stream 9" >}} + Use `dnf` to install HAProxy on a CentOS Stream 9 instance: + + ```command + sudo dnf install haproxy + ``` + {{< /tab >}} + {{< tab "openSUSE Leap 15.6" >}} + Use `zypper` to install HAProxy on an openSUSE Leap 15.6 instance: + + ```command + sudo zypper in haproxy + ``` + {{< /tab >}} + {{< /tabs >}} + +1. Verify the HAProxy installation by checking the installed version number: + + ```command + sudo haproxy -v + ``` + + {{< tabs >}} + {{< tab "Ubuntu 24.04 LTS" >}} + ```output + HAProxy version 2.8.5-1ubuntu3 2024/04/01 - https://haproxy.org/ + Status: long-term supported branch - will stop receiving fixes around Q2 2028. + Known bugs: http://www.haproxy.org/bugs/bugs-2.8.5.html + Running on: Linux 6.8.0-44-generic #44-Ubuntu SMP PREEMPT_DYNAMIC Tue Aug 13 13:35:26 UTC 2024 x86_64 + ``` + {{< /tab >}} + {{< tab "CentOS Stream 9" >}} + ```output + HAProxy version 2.4.22-f8e3218 2023/02/14 - https://haproxy.org/ + Status: long-term supported branch - will stop receiving fixes around Q2 2026. + Known bugs: http://www.haproxy.org/bugs/bugs-2.4.22.html + Running on: Linux 5.14.0-496.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Aug 12 20:37:54 UTC 2024 x86_64 + ``` + {{< /tab >}} + {{< tab "openSUSE Leap 15.6" >}} + ```output + HAProxy version 2.8.6 2024/02/15 - https://haproxy.org/ + Status: long-term supported branch - will stop receiving fixes around Q2 2028. + Known bugs: http://www.haproxy.org/bugs/bugs-2.8.6.html + Running on: Linux 6.4.0-150600.23.17-default #1 SMP PREEMPT_DYNAMIC Tue Jul 30 06:37:32 UTC 2024 (9c450d7) x86_64 + ``` + {{< /tab >}} + {{< /tabs >}} + +1. Use `systemctl` to start HAProxy: + + ```command + sudo systemctl start haproxy + ``` + +1.
Use `systemctl` to configure HAProxy to automatically start after a reboot: + + ```command + sudo systemctl enable haproxy + ``` + +1. Verify HAProxy is `active (running)`: + + ```command + systemctl status haproxy + ``` + + ```output + ● haproxy.service - HAProxy Load Balancer + Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; preset: enabled) + Active: active (running) since Tue 2024-09-17 20:37:22 UTC; 1 day 1h ago + Docs: man:haproxy(1) + file:/usr/share/doc/haproxy/configuration.txt.gz + Process: 46011 ExecReload=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS) + Process: 46014 ExecReload=/bin/kill -USR2 $MAINPID (code=exited, status=0/SUCCESS) + Main PID: 35012 (haproxy) + Status: "Ready." + Tasks: 2 (limit: 1068) + Memory: 40.6M (peak: 75.5M swap: 224.0K swap peak: 23.9M) + CPU: 37.675s + CGroup: /system.slice/haproxy.service + ├─35012 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock + └─46018 /usr/sbin/haproxy -sf 45988 -x sockpair@5 -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock + ``` + +## The HAProxy Configuration File + +HAProxy is controlled through its configuration file and the CLI. The default HAProxy configuration file is created at `/etc/haproxy/haproxy.cfg` during installation, and contains the settings needed to perform network balancing and flow control. It can be edited with any command line-based text editor. + +To edit and use the TCP load balancing and health check functions in this guide, open the HAProxy configuration file with the text editor of your choice: + +```command +sudo nano /etc/haproxy/haproxy.cfg +``` + +## TCP Load Balancing + +Load balancing is defined in two sections of the HAProxy configuration file: `frontend` and `backend`. 
Below are example `frontend` and `backend` configurations for TCP load balancing:
+
+### Frontend Configuration
+
+```file {title="/etc/haproxy/haproxy.cfg"}
+frontend web-test
+    bind *:80
+    mode tcp
+    default_backend web-test
+```
+
+- `frontend` declares that this section is for a frontend configuration called `web-test`.
+- `bind` specifies the interface and port that HAProxy listens to for incoming connections. Here, `*:80` means that HAProxy listens on all available IP addresses (`*`) on port `80`, which is the standard port for web traffic.
+- `mode` is set to TCP, so that HAProxy handles traffic at the transport layer.
+- `default_backend` directs this traffic to a backend named `web-test`, as defined in the next section.
+
+### Backend Configuration
+
+```file {title="/etc/haproxy/haproxy.cfg"}
+backend web-test
+    mode tcp
+    balance roundrobin
+    server server1 {{< placeholder "backend1_VLAN_IP_ADDRESS" >}}:80
+    server server2 {{< placeholder "backend2_VLAN_IP_ADDRESS" >}}:80
+    server server3 {{< placeholder "backend3_VLAN_IP_ADDRESS" >}}:80
+```
+
+- `backend` declares that this section is for a backend configuration called `web-test`.
+- `mode` is again set to TCP, telling HAProxy to handle traffic at the transport layer.
+- `balance` is set to the Round Robin method, which connects each client reaching the HAProxy server's IP address to the next server in the list.
+- `server` statements define the backend servers using the VLAN addresses specified during the initial HAProxy setup.
+
+## TCP Health Checks
+
+HAProxy's load balancing function can also select servers based on their health status. Health checks can be either active or passive. An active health check probes each backend server individually for specific health attributes, whereas a passive check relies on basic connection error information by protocol (Layer 4/TCP or Layer 7/HTTP).
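At Layer 4, a basic probe amounts to a timed TCP connection attempt. As a rough command-line analogy only (the `probe` helper below is hypothetical shell for illustration, not an HAProxy feature), a server can be reported `UP` or `DOWN` based on whether its port accepts a connection:

```command
# Hypothetical sketch: report UP if a TCP connection to HOST PORT
# succeeds, DOWN otherwise. Uses Bash's built-in /dev/tcp pseudo-device,
# so no extra tools are required.
probe() {
    local host="$1" port="$2"
    # The subshell opens (and implicitly closes) a TCP connection;
    # connection failures are silenced and reflected in the exit status.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "UP"
    else
        echo "DOWN"
    fi
}

# Example: probe 192.0.2.10 80   (192.0.2.10 is a documentation address)
```

HAProxy runs equivalent connection tests internally when checks are enabled in `haproxy.cfg`; the helper above is only meant to make the mechanism concrete.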
+
+To enable a basic server health check, include the `check` keyword in the `server` entry of your HAProxy configuration file:
+
+```file {title="/etc/haproxy/haproxy.cfg"}
+server server1 {{< placeholder "backend1_VLAN_IP_ADDRESS" >}}:80 check
+```
+
+When the `check` keyword is included, HAProxy attempts a TCP connection and expects a SYN/ACK response to determine if a server is active. In some cases, servers may correctly respond to this type of query, while individual services and applications may still be down or unavailable.
+
+### Active TCP Health Checks
+
+Active health checks provide more sophisticated monitoring by sending application-specific queries to backend servers and expecting a valid response in return.
+
+To have HAProxy check server health at specified intervals, include the `inter` keyword along with an interval value. For example:
+
+```file {title="/etc/haproxy/haproxy.cfg"}
+server server1 {{< placeholder "backend1_VLAN_IP_ADDRESS" >}}:80 check inter 4s
+```
+
+In this example, HAProxy checks the first server in the pool every four seconds (`inter` values without a unit suffix are interpreted as milliseconds). If the server does not respond as expected, it is marked as down. This process functions like a ping-style health check to verify server availability.
+
+### Passive TCP Health Checks
+
+HAProxy uses the TCP protocol to perform passive health checks on backend servers. With passive health checks, HAProxy monitors Layer 4 (TCP) traffic for errors and marks a server as down when a specified error limit is reached.
+
+Below is an example of the syntax used for a passive health check:
+
+```file {title="/etc/haproxy/haproxy.cfg"}
+server server1 {{< placeholder "backend1_VLAN_IP_ADDRESS" >}}:80 check observe layer4 error-limit 10 on-error mark-down
+```
+
+This configuration specifies a passive health check that observes TCP errors (`observe layer4`). If the number of errors reaches the specified limit of 10 (`error-limit 10`), the server is marked as down (`on-error mark-down`).
To optimize performance and reliability, you can adjust the intervals and error limits for different servers based on their capacity, role, or complexity. For more information, refer to the [HAProxy documentation on active health checks](https://www.haproxy.com/documentation/hapee/1-8r1/load-balancing/health-checking/active-health-checks/).
+
+## Configure TCP Load Balancing with Health Checks
+
+Set the HAProxy configuration file to perform TCP load balancing with basic health checks.
+
+1. Open the HAProxy configuration file with the text editor of your choice:
+
+    ```command
+    sudo nano /etc/haproxy/haproxy.cfg
+    ```
+
+1. Append the following code to the end of the file, and save your changes:
+
+    ```file {title="/etc/haproxy/haproxy.cfg"}
+    frontend web-test
+        bind *:80
+        mode tcp
+        default_backend web-test
+
+    backend web-test
+        mode tcp
+        balance roundrobin
+        server server1 {{< placeholder "backend1_VLAN_IP_ADDRESS" >}}:80 check
+        server server2 {{< placeholder "backend2_VLAN_IP_ADDRESS" >}}:80 check
+        server server3 {{< placeholder "backend3_VLAN_IP_ADDRESS" >}}:80 check
+    ```
+
+1. Restart HAProxy to enable the changes made to the configuration file:
+
+    ```command
+    sudo systemctl restart haproxy
+    ```
+
+    {{< note title="Check for syntax errors" >}}
+    If you encounter any errors after restarting HAProxy, run the following command to check for syntax errors in your `haproxy.cfg` file:
+
+    ```command
+    sudo haproxy -c -f /etc/haproxy/haproxy.cfg
+    ```
+
+    An error message is returned if the configuration file has logical or syntax errors. When the check is complete, each error is listed one per line.
+
+    This command only verifies the syntax and basic logic of the configuration, and it does not guarantee that the configuration works as intended when running.
+    {{< /note >}}
+
+### Test TCP Load Balancing
+
+Load balancing can be verified by visiting the HAProxy instance's public IP address.
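The rotation can also be spot-checked from the command line rather than a browser. The sketch below is illustrative: `tally_backends` is a hypothetical helper that only counts identifiers fed to it, and the commented `curl` loop assumes the placeholder address plus backend pages that each contain a distinguishing string such as `backend1`:

```command
# Hypothetical helper: count how often each backend identifier appears,
# reading one identifier per line on stdin, most frequent first.
tally_backends() {
    sort | uniq -c | sort -rn
}

# Usage sketch (assumes each backend's page names itself):
#   for i in $(seq 1 6); do
#       curl -s http://{{< placeholder "HAProxy_PUBLIC_IP_ADDRESS" >}}/ | grep -o 'backend[0-9]'
#   done | tally_backends
```

With Round Robin balancing across three healthy backends, six requests should tally two hits per backend.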
+
+{{< note title="CentOS Stream 9" >}}
+The default firewall settings for CentOS Stream 9 must be changed prior to testing. Run the following command to temporarily open port `80` to `tcp` traffic:
+
+```command
+sudo firewall-cmd --add-port=80/tcp
+```
+
+Alternatively, use the commands below to configure the firewall to permanently allow `tcp` traffic on port `80`:
+
+```command
+sudo firewall-cmd --permanent --add-port=80/tcp
+sudo firewall-cmd --reload
+```
+{{< /note >}}
+
+1. Open a web browser and navigate to the HAProxy instance's public IP address:
+
+    ```command
+    http://{{< placeholder "HAProxy_PUBLIC_IP_ADDRESS" >}}
+    ```
+
+    The WordPress web page for `backend1` should appear:
+
+    ![The 2024 default WordPress homepage served from backend1.](2024-Default-WordPress-Homepage-backend1.png)
+
+    {{< note title="Certificate warnings" >}}
+    If your browser warns of no HTTPS/TLS certificate, ignore the warning or use the advanced settings to reach the site.
+    {{< /note >}}
+
+1. Open another browser tab and enter the same HAProxy server IP address. This time, the default page for `backend2` should be displayed:
+
+    ![The 2024 default WordPress homepage served from backend2.](2024-Default-WordPress-Homepage-backend2.png)
+
+1. Repeat this process in a third browser tab, and the `backend3` server's web page should appear:
+
+    ![The 2024 default WordPress homepage served from backend3.](2024-Default-WordPress-Homepage-backend3.png)
+
+The HAProxy gateway is now successfully balancing traffic between the three backend instances using the Round Robin method.
+
+### Verify TCP Health Checks
+
+Health checks can be verified by removing one of the backend instances from the server pool. This should trigger a health check failure, causing HAProxy to exclude the unresponsive server from the backend pool.
+
+1. Open the Cloud Manager and choose **Linodes**.
+
+1. Click on the ellipsis (**...**) to the right of your first backend instance, `backend1`.
+
+1. 
Choose **Power Off**, then click **Power Off Linode**.
+
+1. Reload the web browser tabs. HAProxy should no longer route traffic to `backend1`, effectively removing it from the pool.
+
+1. Return to the HAProxy instance and check the logs:
+
+    ```command
+    sudo tail -f /var/log/haproxy.log
+    ```
+
+    Your output should contain a "WARNING" line regarding the "DOWN" status of `server1`:
+
+    ```output
+    [WARNING] (4494) : Server web-test/server1 is DOWN, reason: Layer4 connection problem, info: "No route to host", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
+    ```
+
+This shows that HAProxy's TCP health checks are working as intended.
\ No newline at end of file
diff --git a/docs/guides/web-servers/caddy/how-to-install-and-configure-caddy-on-debian-10/index.md b/docs/guides/web-servers/caddy/how-to-install-and-configure-caddy-on-debian-10/index.md
index cf02da4cc8c..06f6d6a68af 100644
--- a/docs/guides/web-servers/caddy/how-to-install-and-configure-caddy-on-debian-10/index.md
+++ b/docs/guides/web-servers/caddy/how-to-install-and-configure-caddy-on-debian-10/index.md
@@ -36,15 +36,17 @@ aliases: ['/web-servers/caddy/how-to-install-and-configure-caddy-on-debian-10/']

 1. Download `caddy`:

-        sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
-        curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo tee /etc/apt/trusted.gpg.d/caddy-stable.asc
+        sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
+        curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
         curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list

 1. Install Caddy:

+        sudo apt update
         sudo apt install caddy

-1. To verify the installation of caddy type:
+1. To verify the Caddy installation, type:

+        caddy version

 An output similar to the following appears: