
Separating redis and sentinel containers from a single pod #17038

Open
abhishekGupta2205 opened this issue Jun 6, 2023 · 6 comments
Labels: feature-request · on-hold (Issues or Pull Requests with this label will never be considered stale) · redis

Comments

@abhishekGupta2205

Name and Version

bitnami/redis , latest

What is the problem this feature will solve?

At present, if I enable sentinel in my Helm chart, two containers are created inside a single pod. Since I need a minimum of 3 sentinels, I have to spawn 3 pods, which needs more CPU and memory. If instead the containers were separated into different pods, and the number of redis server pods and sentinel pods could be customised independently, more resources could be saved.

What is the feature you are proposing to solve the problem?

Resource consumption would be reduced. Even if I only need 2 redis servers, one master and one replica, I currently have to create 3 pods to fulfil the sentinel quorum.

What alternatives have you considered?

No response
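To make the request concrete, a hypothetical values layout for the proposed topology might look like the sketch below. Note these keys (in particular `sentinel.separatePods` and `sentinel.replicaCount` as an independent scale) do not exist in the current bitnami/redis chart; they only illustrate what is being asked for.

```yaml
# Hypothetical values -- NOT supported by the current bitnami/redis chart.
# Illustrates the request: scale redis nodes and sentinels independently.
architecture: replication
replica:
  replicaCount: 1        # one master + one replica is enough for the data
sentinel:
  enabled: true
  separatePods: true     # hypothetical flag: run sentinels in their own pods
  replicaCount: 3        # three sentinels to satisfy quorum
```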

@github-actions github-actions bot added the triage (Triage is needed) label Jun 6, 2023
@github-actions github-actions bot added in-progress and removed triage Triage is needed labels Jun 7, 2023
@bitnami-bot bitnami-bot assigned andresbono and unassigned javsalgar Jun 7, 2023
@andresbono
Contributor

I think your feature request makes sense. You are using this cluster topology: Master-Replicas with Sentinel, right? Can you share the values of architecture and replicas you are using?

@abhishekGupta2205
Author

@andresbono, currently my architecture is amd64 and my replica count is 3, as I need a minimum of 3 sentinels. Yes, I am using master-slave with sentinel.

@andresbono
Contributor

Redis was configured using IP addresses before useHostnames was implemented. This may be one of the reasons why the sentinels were tied to the workload containers (master/replicas).

As you will need 3 sentinels for a robust deployment, I would suggest that you use replicaCount: 3 even though you only need 1 master and 1 replica. Basically what you are currently doing.

Nevertheless, we will try to review the chart as a whole looking for possible improvements, taking into account your feature request.
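For reference, the suggested workaround maps to chart values roughly like the following (key names as found in the bitnami/redis chart; double-check them against your chart version):

```yaml
# Current workaround: scale to 3 replica pods so that 3 sentinel
# sidecars exist, even though 1 master + 1 replica would suffice.
architecture: replication
replica:
  replicaCount: 3   # 3 pods => 3 sentinels, enough for a majority vote
sentinel:
  enabled: true
  quorum: 2         # majority of 3 sentinels
```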

@github-actions

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label Jun 29, 2023
@andresbono andresbono added the on-hold Issues or Pull Requests with this label will never be considered stale label Jul 3, 2023
@andresbono andresbono removed the stale 15 days without activity label Jul 3, 2023
@troll-os

Besides resource consumption, the pattern of having 3 decoupled sentinels might help ensure quorum.

I want to migrate away from an operator that doesn't work well in cases of failovers

I'm currently testing failover on this chart with replicaCount set to 3. If I lose 2 sentinel nodes (in the current coupled setup), the last failover never occurs; my application, which relies on streams, hangs forever or at best hits a "can't write against a read-only replica" type of error.

Any updates on this ?

@ktzsolt

ktzsolt commented Jan 23, 2025

> Besides resource consumption, the pattern of having 3 decoupled sentinels might help ensure quorum.

Yes, this is much needed. Currently this can only be achieved with 3 pod replicas, i.e. 3 sentinels and 3 redis nodes (1 master, 2 slaves), even though we don't need 2 slaves; 1 master and 1 slave is enough. It doesn't hurt, but it uses up resources.

> I want to migrate away from an operator that doesn't work well in cases of failovers

We also just dropped the redis-operator we were using (I think it is the only one that is currently maintained) because it was unreliable, and started using this chart instead.

> I'm currently testing failover on this chart with replicaCount set to 3. If I lose 2 sentinel nodes (in the current coupled setup), the last failover never occurs; my application, which relies on streams, hangs forever or at best hits a "can't write against a read-only replica" type of error.

What do you mean by that? If you lose 2 sentinels out of 3, then you lose the majority of the quorum, so the sole remaining sentinel cannot decide which node is the master or slave, even if all the master and slave redis containers are working fine and only the sentinel containers are down for some reason. That is the expected behaviour, I think. You need a majority (more than 50%) of votes in a quorum to be decisive, so 2 out of 3, 3 out of 5, and so on.
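The majority rule described above can be sketched in a few lines. This is an illustration of the arithmetic, not of Sentinel's actual implementation: a failover can only be authorized while a strict majority of all configured sentinels is still reachable.

```python
def majority(total_sentinels: int) -> int:
    """Smallest number of votes that is a strict majority (> 50%)."""
    return total_sentinels // 2 + 1

def can_failover(total_sentinels: int, alive_sentinels: int) -> bool:
    # A failover needs a majority of ALL configured sentinels,
    # not just a majority of the ones still alive.
    return alive_sentinels >= majority(total_sentinels)

print(majority(3))         # 2: two votes out of three are a majority
print(can_failover(3, 1))  # False: losing 2 of 3 sentinels blocks failover
print(can_failover(3, 2))  # True
print(can_failover(5, 3))  # True: 3 of 5 is still a majority
```

This is why a 3-pod deployment tolerates the loss of only one sentinel: with a single sentinel left, no majority exists and the failover never happens, exactly as observed above.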
