Releases: autopilotpattern/consul
0.7.3r39
CHANGELOG:
- upgraded to ContainerPilot 3.0.0 (final) #39
This release is the first with a new versioning scheme: (Consul version)r(most recent PR #). For example, version 0.7.3r39 is Consul 0.7.3, associated with PR #39.
This image is available as autopilotpattern/consul:0.7.3r39 on the Docker Hub:
https://hub.docker.com/r/autopilotpattern/consul/
0.7.3-r0.9
CHANGELOG
- updated to ContainerPilot 3.0.0-RC1
- updated to Consul 0.7.3
- testing improvements
Available as autopilotpattern/consul:0.7.3-r0.9 on the Docker Hub: https://hub.docker.com/r/autopilotpattern/consul/
0.7.2-r0.8
CHANGELOG
Available as autopilotpattern/consul:0.7.2-r0.8 on the Docker Hub: https://hub.docker.com/r/autopilotpattern/consul/
0.7.2-r0.7.2
CHANGELOG:
- Upgrade Consul to 0.7.2
- Upgrade ContainerPilot to 2.6.0
Available as autopilotpattern/consul:0.7.2-r0.7.2 on the Docker Hub: https://hub.docker.com/r/autopilotpattern/consul/
0.7-r0.7
0.6-r0.6
Autopilot Pattern Consul 0.4
Changelog
- Moved to autopilotpattern namespace (#9)
- Updated components: Alpine 3.3, Consul 0.6.4 and Containerbuddy 1.3.0 (by @ddunkin #10)
- Removed `glibc` because Consul 0.6 eliminates the `cgo` dependency (by @ddunkin in #10)
- NOT DONE: Versioning the image tag specified in the `*-compose.yml` files
Tags
This release is available on the Docker Hub as the `0.4` tag.
Triton trusted Consul v0.3
Changelog
- Updated components: Consul 0.6.0 and Containerbuddy 0.0.5 alpha (by @rchrd in #7)
- Added Docker Hub shields (#4)
- NOT DONE: Removed `glibc` because Consul 0.6 eliminates the `cgo` dependency
- NOT DONE: Versioning the image tag specified in the `*-compose.yml` files
Tags
This release is available on the Docker Hub as the `0.3` tag.
Triton trusted Consul v0.2
Changelog
- Simplified the bootstrapping of Consul by using Containerbuddy health checks to join a cluster that's been primed with `-bootstrap-expect` (by @tgross in #3).
How it works
This demo first starts up a bootstrap node that starts the raft but expects 2 additional nodes before the raft is healthy. Once this node is up and its IP address is obtained, the rest of the nodes are started and joined to the bootstrap IP address (the value is passed in the `BOOTSTRAP_HOST` environment variable).
If a raft instance fails, the data is preserved among the other instances and the overall availability of the service is preserved because any single instance can authoritatively answer for all instances. Applications that depend on the Consul service should re-try failed requests until they get a response.
Any new raft instances need to be started with a bootstrap IP address, but after the initial cluster is created, the `BOOTSTRAP_HOST` IP address can be any host currently in the raft. This means there is no dependency on the first node after the cluster has been formed.
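To make the flow concrete, here is a rough sketch using plain consul agent commands. It is illustrative only: the image actually drives this through Containerbuddy, the flags shown are standard Consul agent options, and BOOTSTRAP_HOST stands in for the bootstrap IP address described above.

# On the bootstrap node: prime the raft so no leader is elected until 3 servers have joined (illustrative, not the exact Containerbuddy wiring)
consul agent -server -bootstrap-expect=3 -data-dir=/data -client=0.0.0.0

# On each additional node: join via the address in BOOTSTRAP_HOST, retrying until the raft is reachable
consul agent -server -retry-join=$BOOTSTRAP_HOST -data-dir=/data -client=0.0.0.0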
Usage
Please consult the `README.md` for instructions on how to use this release.
v0.1
Triton trusted Consul v0.1
Consul in Docker, designed for availability and durability.
Prep your environment
- Get a Joyent account and add your SSH key.
- Install the Docker Engine (including `docker` and `docker-compose`) on your laptop or other environment, along with the Joyent CloudAPI CLI tools (including the `smartdc` and `json` tools).
- Configure your Docker CLI and Compose for use with Joyent:
curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh
./sdc-docker-setup.sh -k us-east-1.api.joyent.com <ACCOUNT> ~/.ssh/<PRIVATE_KEY_FILE>
Start a trusted Consul raft
- Clone or download this repo
- `cd` into the cloned or downloaded directory
- Execute `bash start.sh` to start everything up
- The Consul dashboard should automatically open in your browser, or follow the links output by the `start.sh` script
Use this in your own composition
Detailed example to come....
How it works
This demo actually sets up two independent Consul services:
- A single-node instance used only for bootstrapping the raft
- A three-node instance that other applications can point to
A running raft has no dependency on the bootstrap instance. New raft instances do need to connect to the bootstrap instance to find the raft, creating a failure gap that is discussed below. If a raft instance fails, the data is preserved among the other instances and the overall availability of the service is preserved because any single instance can authoritatively answer for all instances. Applications that depend on the Consul service should re-try failed requests until they get a response.
Each raft instance will constantly re-register with the bootstrap instance. If the bootstrap instance or its data is lost, a new bootstrap instance can be started and all existing raft instances will re-register with it. In a scenario where the bootstrap instance is unavailable, it will be impossible to start new raft instances until the bootstrap instance has been restarted and at least one existing raft member has re-registered.
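On the application side, the retry guidance above can be as simple as a loop against Consul's HTTP API. This is only a sketch: consul.example.com is a placeholder for whatever address your application uses to reach the raft, and 8500 is Consul's default HTTP API port.

# Retry until any surviving raft member answers the health query (hostname is a placeholder)
until curl -fsS http://consul.example.com:8500/v1/health/service/consul; do
  sleep 1
done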
Triton-specific availability advantages
Some details about how Docker containers work on Triton have specific bearing on the durability and availability of this service:
- Docker containers are first-order objects on Triton. They run on bare metal, and their overall availability is similar to or better than what you would expect of a virtual machine in other environments.
- Docker containers on Triton preserve their IP and any data on disk when they reboot.
- Linked containers in Docker Compose on Triton are actually distributed across multiple unique physical nodes for maximum availability in the case of node failures.
Credit where it's due
This project builds on the fine examples set by Jeff Lindsay's (Glider Labs) Consul in Docker work. It also, obviously, wouldn't be possible without the outstanding work of the Hashicorp team that made consul.io.