High Availability Helpy
This article describes the architectural and configuration changes required to move from a single-node, monolithic Helpy instance to a more resilient, reliable, and scalable architecture. These changes apply to both the Helpy open source core and the on-premise cloud edition.
The changes outlined below follow the “single-concern” principle, separating the duties of the application server from those of the worker, database, and file store. They will significantly improve reliability even if you only run a single application instance.
These changes are required for running in the following scenarios:
- AWS Elastic Beanstalk
- Heroku (although the DB node is handled for you automatically)
- Kubernetes
- Clustered architecture
Use a dedicated background worker
The Helpy core ships with an “in-memory” background worker. This works well for simple, non-mission-critical installations, but it is not resilient to restarts: any queued jobs are lost when the server restarts. Furthermore, a large job can consume enough memory that other services on the same machine (database, web server) suffer.
A more resilient option is to upgrade to Sidekiq, which can be done with the following changes (a configuration sketch follows the list):
- Install Redis. Sidekiq requires Redis to persist jobs.
- Add the Sidekiq gem to your Gemfile and run bundle install.
- Update /config/environments/production.rb to use the Sidekiq backend.
- Add configuration files and start Sidekiq on your server(s).
- Add Monit to restart the Sidekiq process if it stops.
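For example, the Gemfile and production.rb changes might look like the following. This is a minimal sketch that assumes Helpy's background jobs run through ActiveJob; the Redis URL in the start command is illustrative.
# Gemfile -- add the Sidekiq gem, then run `bundle install`
gem 'sidekiq'
# config/environments/production.rb -- route background jobs through Sidekiq
Rails.application.configure do
  config.active_job.queue_adapter = :sidekiq
end
# Start Sidekiq on each worker server, pointing it at your Redis instance:
#   REDIS_URL=redis://localhost:6379/0 bundle exec sidekiq -e production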
If you run a clustered implementation, you can run multiple worker servers, with replication between Redis instances.
Use an external file store
By default, the Helpy core stores all uploads and attachments on the application server itself. This means that if the app server is compromised or lost, your attachment files and images are lost with it, and the app server will eventually run out of disk space.
Using an external file store such as AWS S3 is much preferred, and provides instant durability by automatically storing your files in the cloud. Starting with version 2.3, Helpy has built-in support for configuring an external file store with environment variables. Any S3-compatible file store can be used by setting the following environment variables:
REMOTE_STORAGE=true
S3_KEY=change_key
S3_SECRET=change_secret
S3_REGION=change_region
S3_ENDPOINT=change_endpoint
S3_BUCKET_NAME=change_bucket_name
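For example, pointing Helpy at an S3-compatible store such as DigitalOcean Spaces might look like the following; the region, bucket name, and credentials here are placeholders, not real values.
REMOTE_STORAGE=true
S3_KEY=your_spaces_access_key
S3_SECRET=your_spaces_secret_key
S3_REGION=nyc3
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
S3_BUCKET_NAME=helpy-attachments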
Use a separate database node
Running a separate database node frees your app server to focus on serving requests, and moving to a DBaaS such as AWS RDS or DigitalOcean's managed Postgres service adds instant reliability. Furthermore, separating the database makes adding new app server or worker instances trivial.
To further improve resilience, consider clustering your database with at least one replica configured as a hot standby, in case the primary node fails.
To update the database configuration in Helpy, edit your config/database.yml to look like the following:
production:
  <<: *default
  host: <%= ENV['POSTGRES_HOST'] %>
  port: <%= ENV['POSTGRES_PORT'] %>
  database: <%= ENV['POSTGRES_DB'] %>
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
This allows you to point your app servers at a remote Postgres server with the POSTGRES_HOST environment variable.
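For example, pointing the app at a hypothetical AWS RDS instance might look like this (the hostname and credentials below are placeholders):
POSTGRES_HOST=helpy-db.abc123xyz.us-east-1.rds.amazonaws.com
POSTGRES_PORT=5432
POSTGRES_DB=helpy_production
POSTGRES_USER=helpy
POSTGRES_PASSWORD=change_password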
Load balance across multiple app servers
Once you have made all of the changes above (or if you are deploying to AWS Elastic Beanstalk), you will be able to run multiple application instances in a High Availability (HA) cluster. Running multiple instances of the Helpy app server means you need a way to distribute traffic between them. There are many ways to do this, such as AWS Elastic Load Balancer, DigitalOcean Load Balancers, HAProxy, and others.
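As one illustration, a minimal HAProxy fragment balancing two Helpy app servers might look like the following; the IP addresses and ports are hypothetical, and a complete haproxy.cfg also needs global and defaults sections.
frontend helpy_http
    bind *:80
    default_backend helpy_app_servers
backend helpy_app_servers
    balance roundrobin
    server app1 10.0.0.11:3000 check
    server app2 10.0.0.12:3000 check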