[16.0][IMP] queue_job: remove dead jobs requeuer cron and automatically requeue dead jobs #716
base: 16.0
Conversation
Hi @guewen,
This looks very promising!
A few thoughts:
- There is a Caveat comment in `runner.py` that needs updating.
- This should handle the `enqueued` state too, which is the state when the runner has decided that a job needs to run but the `/queue_job/runjob` controller has not set it to started yet. This state normally exists for a very short time, but I have seen situations where the workers are overloaded, take time to accept `/queue_job/runjob` requests, then die, leaving jobs in the `enqueued` state forever.
- Since there are two race conditions, between `enqueued` and `started`, and between `started` and the time the job transaction actually takes the lock, I wonder if we should not introduce a small elapsed-time condition (10s?) in `reset_dead_jobs`, maybe based on `date_enqueued`. See the sketch below.
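For illustration, the suggested grace period could be a simple elapsed-time check on `date_enqueued` (a hypothetical sketch; the threshold and helper name are made up, not part of the PR):

```python
from datetime import datetime, timedelta

# Hypothetical grace period: don't treat a job as dead until it has been
# enqueued for at least this long, to sidestep the two race windows above.
DEAD_JOB_GRACE = timedelta(seconds=10)

def past_grace_period(date_enqueued, now=None):
    """Return True once the job is old enough to be safely re-queued."""
    now = now or datetime.utcnow()
    return date_enqueued < now - DEAD_JOB_GRACE
```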
(force-pushed from b0f9f59 to abd0e2f)
Thank you for your suggestions. I have implemented the necessary corrections.
A few more minor comments.
`queue_job/job.py` (outdated)
```python
if self.max_retries and self.retry >= self.max_retries:
    raise FailedJobError("Max. retries (%d) reached" % (self.max_retries))
```
Why add this? It sounds unrelated.
The check that `retry` has not exceeded `max_retries` was done only when a `RetryableJobError` was raised during the execution of the job.
Since I am incrementing `retry` when re-queuing a job that was killed, I need to do the check before performing the job, not only when an exception is raised.
Moreover, I think it doesn't make sense to perform the check only when an exception has been raised 🤔
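Concretely, the guard quoted in the diff above would run at the start of job execution instead of only inside the `RetryableJobError` handler (a paraphrased sketch of the intent, not the PR's exact code):

```python
from odoo.addons.queue_job.exception import FailedJobError

def perform(job):
    # Check the retry budget up front: a job whose worker was killed is
    # re-queued with retry + 1 without raising RetryableJobError, so a
    # check living only in the exception handler would never see it.
    if job.max_retries and job.retry >= job.max_retries:
        raise FailedJobError("Max. retries (%d) reached" % job.max_retries)
    return job.func(*job.args, **job.kwargs)
```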
(force-pushed from 310847e to 0102273)
(force-pushed from f98c856 to 8edf042)
(force-pushed from 8edf042 to fd96829)
Thanks for tackling this old issue, I like this elegant solution. It should also be quite efficient. Congrats on this work.
```xml
@@ -1,17 +1,6 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
    <data noupdate="1">
        <record id="ir_cron_queue_job_garbage_collector" model="ir.cron">
```
Should this become
`<delete id="ir_cron_queue_job_garbage_collector" model="ir.cron"/>`
to clean up upgrades and avoid filling cron logs with errors, since the `requeue_stuck_jobs` method is gone?
I have archived the cron in `pre-migration.py`, so there won't be any error.
I always prefer to archive (set `active` to false) rather than delete.
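For illustration, such a pre-migration hook could look like this (a hypothetical sketch, not the PR's actual file; the XML id comes from the diff quoted above):

```python
# migrations/16.0.x.y/pre-migration.py (hypothetical path and version)
def migrate(cr, version):
    # Archive the obsolete garbage-collector cron instead of deleting it,
    # so the record is kept for reference but never runs again.
    cr.execute(
        """
        UPDATE ir_cron
        SET active = false
        WHERE id IN (
            SELECT res_id
            FROM ir_model_data
            WHERE module = 'queue_job'
              AND name = 'ir_cron_queue_job_garbage_collector'
              AND model = 'ir.cron'
        )
        """
    )
```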
@AnizR I think we can also remove the …
Yes, jobs that have not been started will be re-queued by my new mechanism.
(force-pushed from 6f8f4b8 to e0e9327)
LGTM. Thanks for that.
@guewen Could you merge this one?
@guewen We're finally going to do a few more tests before we merge this. We'll keep you informed.
Review LGTM.
I also did some tests with jobs exceeding `limit_time_cpu`, and it works as expected.
Thanks for addressing this problem.
I'm asking for more comments to make the intent explicit.
And I have a question: why all these joins, instead of using the id or uuid directly?
Some comments
Thank you for all your valuable feedback! I appreciate the time you've taken to review this PR.
@AnizR another piece of feedback I got today is that the new …
(force-pushed from f3c13cc to 8646632)
I think I've addressed every remark, thank you all! I've managed to add some tests for the newly introduced mechanism.
(force-pushed from 8646632 to db89122)
…eue jobs in timeout
[IMP] queue_job: increment 'retry' when re-queuing jobs that have been killed
(force-pushed from db89122 to 3e5645b)
A model is better than a manually managed table as it will protect the table from deletion by database_cleanup.
@AnizR I realized that the AbstractModel technique to create the lock table would not prevent it from being purged by …
You can add the exception in the …
I did a small test to simulate re-queuing of 'dead jobs'.
Comments added as I requested, thanks @AnizR
We'll do tests today.
Backport of enhancement from Odoo 16 PR on OCA: OCA#716
Goal
Automatically re-queue jobs that have been started but whose worker has been killed, and rely on Odoo's `limit_time_cpu` and `limit_time_real` for job execution.
Technical explanation
Everything relies on a new table, `queue_job_locks`, which contains the ids of jobs that have been started.
When a job is executed, its row in `queue_job_locks` is locked.
If a row in `queue_job_locks` has `state='started'` but is not locked, the job is no longer being executed, typically because its worker was killed.
Using this information, we can re-queue these jobs.
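A minimal sketch of that detection, assuming the table and column names above (the PR's actual query may differ): a live job holds the lock on its `queue_job_locks` row for the whole transaction, so `FOR UPDATE SKIP LOCKED` returns only rows whose worker is gone.

```python
def find_dead_job_ids(cr):
    # Rows we CAN lock are not held by any running transaction; if the
    # matching job is still marked 'started', its worker died mid-run.
    cr.execute(
        """
        SELECT queue_job.id
        FROM queue_job_locks
        JOIN queue_job ON queue_job.id = queue_job_locks.id
        WHERE queue_job.state = 'started'
        FOR UPDATE OF queue_job_locks SKIP LOCKED
        """
    )
    return [row[0] for row in cr.fetchall()]
```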
Why not lock directly in the `queue_job` table?
This was tried in #423, but it wasn't working when a job raised an error: it seems the row was already locked, so the write that would set the job to `failed` could not happen before committing.

Improve current behavior
Re-queue jobs that have been killed, but increment their `retry` count so that a job that always gets killed doesn't end up in infinite re-queuing.
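Building on the `find_dead_job_ids` sketch above, the requeue step could then look roughly like this (again a hypothetical sketch, not the PR's exact implementation):

```python
def requeue_dead_jobs(cr):
    # Count the kill as one attempt, so a job that always gets killed
    # eventually exceeds max_retries and fails instead of looping forever.
    job_ids = find_dead_job_ids(cr)
    if job_ids:
        cr.execute(
            "UPDATE queue_job SET state = 'pending', retry = retry + 1"
            " WHERE id IN %s",
            (tuple(job_ids),),
        )
    return job_ids
```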