
refactor: put dataprocessor check under namespace #53

Merged Nov 3, 2023 (13 commits)

Conversation

@Elsie4ever (Contributor) commented Oct 25, 2023

Issue: The original Alice-Spring quick start deployment template uses a different namespace for the custom location resource. This discrepancy splits Bluefin resources across namespaces: the pod lands in the original namespace while the rest of the Bluefin-related resources land in the custom location namespace. Consequently, the check fails because not all of the resources are in the same namespace.

This PR updates the bluefin check to run under namespaces to fix the issue:

  1. Check pods across all namespaces.
  2. List instances, pipelines, and datasets under each namespace (see the sketch after this list).
  3. Rename bluefin to dataprocessor in the check.
  4. Update the unit tests.
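
For illustration, grouping the listed resources by namespace could look roughly like the minimal sketch below; the helper name is hypothetical and not the extension's actual API:

```python
# Minimal, hypothetical sketch of bucketing listed resources (instances,
# pipelines, datasets) by namespace so each namespace is checked on its own.
from collections import defaultdict
from typing import Any, Dict, List


def group_by_namespace(resources: List[Dict[str, Any]]) -> Dict[str, List[Dict[str, Any]]]:
    grouped: Dict[str, List[Dict[str, Any]]] = defaultdict(list)
    for resource in resources:
        grouped[resource["metadata"]["namespace"]].append(resource)
    return dict(grouped)
```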

@Elsie4ever Elsie4ever requested a review from digimaun as a code owner October 25, 2023 22:46
@Elsie4ever Elsie4ever requested review from c-ryan-k and vilit1 October 26, 2023 17:25
@Elsie4ever Elsie4ever changed the title from "refactor: put bluefin check under namespace" to "refactor: put dataprocessor check under namespace" on Nov 1, 2023
@vilit1 (Contributor) left a comment

LGTM - assuming people won't double back on the acronym

Comment on lines +131 to +135
check_manager.add_target_eval(
target_name=target_instances,
status=CheckTaskStatus.success.value,
value={"instances": len(all_instances)}
)
Member left a comment

Should this target (number of instances) be per-namespace?

@Elsie4ever (Contributor, Author) left a comment

I actually have a question about all the resource eval functions: when we enter

if not all_<resource>:

we add a target eval with an error status. So I'm wondering whether, in this case, we should also add a matching success target eval when all_<resource> is non-empty. That's the reason I added this.
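
For illustration, the pairing being described might look roughly like this, reusing only the add_target_eval call shape quoted above (names come from that snippet; this is not the exact check code):

```python
# Illustrative pairing: error eval when no instances are found, matching
# success eval (the quoted lines) when they are.
if not all_instances:
    check_manager.add_target_eval(
        target_name=target_instances,
        status=CheckTaskStatus.error.value,
        value={"instances": 0}
    )
else:
    check_manager.add_target_eval(
        target_name=target_instances,
        status=CheckTaskStatus.success.value,
        value={"instances": len(all_instances)}
    )
```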

Member left a comment

The target eval for no resources is actually skipped for some resources (optional connectors like mqtt and data lake) and an error for others (required resources like mq broker, etc.) - but those are only added when there's no resource, since otherwise there'd be no evals or conditions :)

I guess if it's for the "all_" check (target: at least one resource in any namespace) then it makes sense to have it in the _all_ target.

@c-ryan-k (Member) commented Nov 2, 2023

I see there's a per-namespace target_status set based on instance count down on line 156, so that's failing a data processor -> [namespace] target, but there really isn't an eval or a condition that says we need one resource in every namespace.

So right now this check has target conditions / evals that look like:

  • all
    • conditions: ["len(instances)==1", "provisioningStatus"] (which, status should probably be per-namespace, len is fine here)
    • eval: at least one dataprocessor instance in all namespaces
  • namespace 1
    • conditions: (same conditions as _all_, len and provisioningstatus)
    • no eval, but error status if conditions not met
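
A rough sketch of that shape (hypothetical target names and helper; check_manager and CheckTaskStatus as in the snippet quoted earlier, not the actual implementation):

```python
# Hypothetical sketch: one eval on the "all" target (at least one instance in
# any namespace) plus per-namespace statuses for the same conditions.
def evaluate_instance_targets(check_manager, instances_by_namespace):
    all_instances = [i for items in instances_by_namespace.values() for i in items]

    # _all_ target: at least one dataprocessor instance across namespaces.
    check_manager.add_target_eval(
        target_name="dataprocessor.instances.all",
        status=CheckTaskStatus.success.value if all_instances else CheckTaskStatus.error.value,
        value={"instances": len(all_instances)}
    )

    # Per-namespace targets: same len / provisioningStatus-style conditions,
    # with an error status when they are not met in that namespace.
    for namespace, instances in instances_by_namespace.items():
        status = CheckTaskStatus.success.value if instances else CheckTaskStatus.error.value
        check_manager.add_target_eval(
            target_name=f"dataprocessor.instances[{namespace}]",
            status=status,
            value={"instances": len(instances)}
        )
```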

Member left a comment

I don't think we need to finalize it in this PR, but let's make a point to come back to this discussion and make sure all our checks are consistent across DP, LNM, and MQ

@c-ryan-k (Member) left a comment

I have a few open questions that we should finalize at some point, but they're not blockers for this PR.

@Elsie4ever merged commit ff307b1 into Azure:dev on Nov 3, 2023. 12 checks passed.