Deploy only one celery instance #366

Merged: 10 commits merged into main from run-celery-only-in-one-unit on Mar 15, 2024

Conversation

@arturo-seijas (Collaborator) commented Mar 13, 2024

Applicable spec: N/A

Overview

Run celery in only one unit. To achieve that, the dedicated celery container is removed, and the celery worker is instead planned on a single unit alongside the indico process.
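For illustration, a minimal sketch of how single-unit planning could look. The workload container name, peer relation name, and service commands below are assumptions; only the "celery-unit" databag key comes from this PR's discussion:

```python
# Sketch only: container name, relation name, and commands are assumptions,
# not necessarily what src/charm.py does.
import ops


class IndicoOperatorCharm(ops.CharmBase):
    def _is_celery_unit(self) -> bool:
        """Return True if this unit is nominated to run the celery worker."""
        peer_relation = self.model.get_relation("indico-peers")
        return bool(
            peer_relation
            and peer_relation.data[self.app].get("celery-unit") == self.unit.name
        )

    def _replan_workload(self) -> None:
        """Plan the indico process on every unit; add celery only on the nominated one."""
        container = self.unit.get_container("indico")
        services = {
            "indico": {
                "override": "replace",
                "command": "/srv/indico/start-indico.sh",  # illustrative command
                "startup": "enabled",
            }
        }
        if self._is_celery_unit():
            services["celery"] = {
                "override": "replace",
                "command": "indico celery worker",  # illustrative command
                "startup": "enabled",
            }
        container.add_layer("indico", {"services": services}, combine=True)
        container.replan()
```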

Rationale

Celery does not provide exactly-once guarantees, so running multiple units can cause the same task to be executed by multiple workers, degrading user experience when the task has a visible result.

Juju Events Changes

  • leader-elected
  • peer-relation-departed

Module Changes

charm.py

Library Changes

N/A

Checklist

@arturo-seijas marked this pull request as ready for review on March 14, 2024 08:34
@arturo-seijas requested a review from a team as a code owner on March 14, 2024 08:34
@yanksyoon (Contributor) previously approved these changes Mar 14, 2024

LGTM! Just questions out of curiosity!

(Review threads on src/charm.py and pyproject.toml resolved)
@mthaddon (Contributor) left a comment

If I'm understanding this correctly, we're relying on the peer-relation-departed event happening before leader-elected. If it doesn't, we'd remove the nominated unit but not nominate a new one until another leader-elected event is triggered at some point in the future.

@arturo-seijas (Collaborator, Author)

> If I'm understanding this correctly, we're relying on the peer-relation-departed event happening before leader-elected. If it doesn't, we'd remove the nominated unit but not nominate a new one until another leader-elected event is triggered at some point in the future.

Exactly. There should always be a leader, so if the celery unit is the leader, a leader-elected event will be fired when it is removed (if not earlier).

@mthaddon (Contributor)

> Exactly. There should always be a leader, so if the celery unit is the leader, a leader-elected event will be fired when it is removed (if not earlier).

But if the leader-elected event is fired before peer-relation-departed, we'd run the following:

    if peer_relation and not peer_relation.data[self.app].get("celery-unit"):

Since "celery-unit" is set, it would do nothing. Then the peer-relation-departed event fires, we unset this value, and we'd continue with an unset value until the next leader-elected event is fired, at some unknown point in the future.
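For context, the quoted guard would sit in a leader-elected handler along these lines; this is a reconstruction from the quoted line, with an assumed "indico-peers" relation name, not the PR's exact code:

```python
import ops


class IndicoOperatorCharm(ops.CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.leader_elected, self._on_leader_elected)

    def _on_leader_elected(self, _: ops.LeaderElectedEvent) -> None:
        """Nominate this unit to run celery when no unit is nominated yet."""
        peer_relation = self.model.get_relation("indico-peers")
        # The guard under discussion: a no-op while "celery-unit" is still set,
        # which is exactly the ordering concern raised in this comment.
        if peer_relation and not peer_relation.data[self.app].get("celery-unit"):
            peer_relation.data[self.app]["celery-unit"] = self.unit.name
```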

@arturo-seijas (Collaborator, Author)

> > Exactly. There should always be a leader, so if the celery unit is the leader, a leader-elected event will be fired when it is removed (if not earlier).
>
> But if the leader-elected event is fired before peer-relation-departed, we'd run the following:
>
>     if peer_relation and not peer_relation.data[self.app].get("celery-unit"):
>
> Since "celery-unit" is set, it would do nothing. Then the peer-relation-departed event fires, we unset this value, and we'd continue with an unset value until the next leader-elected event is fired, at some unknown point in the future.

In that case, the change would occur in the peer-relation-departed handler. All the units receive that event, so if the celery unit is not the leader, the leader will set that value to itself.
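A sketch of the departed-handler behaviour described here; the relation name and event wiring are assumptions, not the charm's exact code:

```python
import ops


class IndicoOperatorCharm(ops.CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(
            self.on.indico_peers_relation_departed, self._on_peer_relation_departed
        )

    def _on_peer_relation_departed(self, event: ops.RelationDepartedEvent) -> None:
        """Re-nominate celery onto the leader when the nominated unit leaves."""
        if not self.unit.is_leader():
            # Every unit receives this event, but only the leader may write
            # application data in the peer relation.
            return
        peer_relation = self.model.get_relation("indico-peers")
        if not peer_relation:
            return
        celery_unit = peer_relation.data[self.app].get("celery-unit")
        # If the departing unit was the nominated celery unit, the leader
        # nominates itself, covering the case where leader-elected fired first
        # (or never fires, because the departing unit was not the leader).
        if event.departing_unit and event.departing_unit.name == celery_unit:
            peer_relation.data[self.app]["celery-unit"] = self.unit.name
```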

@mthaddon (Contributor)

> In that case, the change would occur in the peer-relation-departed handler. All the units receive that event, so if the celery unit is not the leader, the leader will set that value to itself.

Sorry, you are right. Thanks for the out-of-band explanation; I'd missed that the peer-relation-departed handler behaves differently depending on whether the "celery-unit" value is set to the current leader, precisely to take care of this possibility.


Test coverage for b1482f9

Name                       Stmts   Miss Branch BrPart  Cover   Missing
----------------------------------------------------------------------
src/charm.py                 325      9     84      8    96%   567->606, 652, 711-712, 794->810, 796->805, 805->810, 818-819, 858, 891->exit, 929-935
src/database_observer.py      33      0      4      0   100%
src/smtp_observer.py          16      0      0      0   100%
src/state.py                  44      0      4      0   100%
----------------------------------------------------------------------
TOTAL                        418      9     92      8    97%

Static code analysis report

Run started:2024-03-14 11:21:42.561937

Test results:
  No issues identified.

Code scanned:
  Total lines of code: 2457
  Total lines skipped (#nosec): 6
  Total potential issues skipped due to specifically being disabled (e.g., #nosec BXXX): 0

Run metrics:
  Total issues (by severity):
  	Undefined: 0
  	Low: 0
  	Medium: 0
  	High: 0
  Total issues (by confidence):
  	Undefined: 0
  	Low: 0
  	Medium: 0
  	High: 0
Files skipped (0):

@arturo-seijas merged commit 599624b into main on Mar 15, 2024
22 checks passed
@arturo-seijas deleted the run-celery-only-in-one-unit branch on March 15, 2024 11:20