Scale faucet #278
@mutantcornholio could you link to pipelines or logs, or provide log examples? Is the scenario: when you deploy and the app can't start, it 1. fails the CI job and 2. still ends up with a deployed app running broken code?
The problem is that if you "scale" the faucet to two instances, there will be two processes listening for messages on Matrix, and so drips will be produced twice.
Yes, that obviously needs to be dealt with.
I'm probably more inclined to do it the same way as @paritytech/opstooling. WDYT?
For this use case, it sounds much more appropriate to use a queue such as RabbitMQ (just to throw out a name). You'd need one (or more) listeners that add the "job" to the queue. If using several listeners, you want to make sure a key is used to prevent duplicates. With that, it becomes much easier to have as many "workers" as you wish (i.e. k8s deployments) that pick up tasks and remove them from the queue once successfully done. If a worker fails, the entry remains in the queue and can be picked up by the next worker.
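A minimal in-memory sketch of the dedup idea (names are illustrative; a real deployment would use a broker such as RabbitMQ rather than this stand-in): two listeners that see the same Matrix event use the event id as a dedup key, so only one job ever lands in the queue.

```python
import queue


class DedupQueue:
    """Job queue that drops duplicate jobs, identified by a dedup key.

    In-memory stand-in for a real broker; class and method names here
    are hypothetical, not part of the faucet codebase.
    """

    def __init__(self):
        self._jobs = queue.Queue()
        self._seen = set()

    def enqueue(self, key, payload):
        """Add a job unless one with the same key was already enqueued."""
        if key in self._seen:
            return False  # a second listener saw the same Matrix event
        self._seen.add(key)
        self._jobs.put((key, payload))
        return True

    def dequeue(self):
        """Hand the next job to a worker."""
        return self._jobs.get_nowait()


# Two listener instances receive the same Matrix event and both try to
# enqueue a drip; the event id serves as the dedup key.
q = DedupQueue()
first = q.enqueue("$event123", {"address": "5F...", "amount": 1})
second = q.enqueue("$event123", {"address": "5F...", "amount": 1})
```

Only the first `enqueue` succeeds, so however many listeners run, the drip is produced once.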
The cost of splitting the instance into master/worker wouldn't be worth it, IMO. The goal is to have minimal redundancy to allow maintenance without downtime. If we split the instance, we'd end up with four instances for every network, while two would do perfectly fine. We could still go with a job queue, but have both producer and consumer in the same instance. That would be basically the same as what I suggested, except we'd get retries, timeouts, etc. for free.
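A sketch of the single-instance variant under the same assumptions (all names hypothetical): the Matrix listener acts as producer and the drip worker as consumer in one process, and the queue gives retries for free by re-enqueueing failed jobs up to a limit.

```python
import queue

MAX_ATTEMPTS = 3  # illustrative retry limit


def process_jobs(jobs, send_drip):
    """Consume jobs from an in-process queue, retrying failures.

    `send_drip` is a hypothetical callable performing the transfer;
    failed jobs are re-enqueued until MAX_ATTEMPTS is reached.
    """
    results = []
    while True:
        try:
            attempt, job = jobs.get_nowait()
        except queue.Empty:
            return results
        try:
            send_drip(job)
            results.append(("ok", job))
        except Exception:
            if attempt + 1 < MAX_ATTEMPTS:
                jobs.put((attempt + 1, job))  # retry later
            else:
                results.append(("failed", job))


# Producer side: the Matrix listener enqueues drips in the same process.
jobs = queue.Queue()
jobs.put((0, "drip-1"))
jobs.put((0, "drip-2"))

calls = {"drip-1": 0, "drip-2": 0}


def flaky_send(job):
    # Fail drip-2 once to exercise the retry path.
    calls[job] += 1
    if job == "drip-2" and calls[job] == 1:
        raise RuntimeError("transient RPC error")


results = process_jobs(jobs, flaky_send)
```

Both drips eventually succeed; drip-2 takes two attempts because its first send fails and the job is re-enqueued rather than lost.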
I'm a bit confused: why wasn't this a problem before? And if it is, are there ways to solve it other than having two instances and introducing a DB, etc.? That sounds like overkill for what may just be a wrong configuration or something.
It feels like it has always worked like that and nobody cared. I don't think any deployment configuration can get around the problem of two instances listening to the same Matrix events and duplicating drips as a result.
I think we have been lucky so far.
I also realised that the faucet stores its drips in a local, non-persistent (!) SQLite database. However, it's all simple stuff, isn't it?
I can't find the logs anymore, unfortunately (it'd be great to save a snapshot in text format next time).
Currently, failed deployments lead to outages.
Let's have two instances of each, so failed deployments lead to stuck deploys rather than downtime.