Multiple queue guide uses kubectl --export, which is deprecated #243

Closed
kevin-lindsay-1 opened this issue Oct 24, 2020 · 7 comments

@kevin-lindsay-1
What would you like to let us know?

What's wrong?

  - [ ] I found a typo
  - [x] An update required to an old post
  - [ ] I'm stuck with a tutorial

Let us know more below:

https://docs.openfaas.com/reference/async/#multiple-queues

  1. Create a queue-worker for the new queue name
    The example command shows kubectl get --export, which is deprecated.

A good alternative for easily creating new queue workers now seems to be needed.
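For reference, --export was deprecated in Kubernetes 1.14 and removed in 1.18, so a command of the shape the docs use (illustrative, not quoted verbatim from the docs) fails on current clusters:

# Deprecated in Kubernetes 1.14 and removed in 1.18; errors on newer clusters.
kubectl get -n openfaas deploy/queue-worker -o yaml --export > new-queue-worker.yaml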

Do you want to chat to us about this issue?

Head over to Slack https://slack.openfaas.io/

@alexellis (Member)

"Deprecated" in which version? :)

@alexellis alexellis transferred this issue from openfaas/openfaas.github.io Oct 24, 2020
@alexellis (Member)

I'll move this to the correct repo, and feel free to send a PR, but bear in mind that not everyone is using K8s 1.19 in production yet.

@kevin-lindsay-1 (Author)

Yeah, I'm thinking about the best approach, because it's less than ideal to grab YAML out of resources that are already running in Kubernetes like that. Ideally, I'd say multiple queues should be configured via helm for faas-netes, but I'm not sure what that contract would look like for the other non-Kubernetes implementations.
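A purely hypothetical sketch of what that helm contract might look like in the faas-netes values.yaml; neither these keys nor the feature exist in the chart:

# Hypothetical values.yaml extension; not a real chart option.
queueWorker:
  additionalQueues:
    - name: slow-queue
      ackWait: 10m
      maxInflight: 1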

As an aside, do you know of a good way to monitor individual queues and what they're waiting on? Tailing the logs is less than ideal, especially with multiple workers and multiple in-flight messages. Perhaps a graph could handle that, but depending on the ideal feature set, this might warrant a decent bespoke UI that hits certain metrics to give you insight into what's blocking a queue (spitballing here).
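One option worth noting: NATS Streaming exposes HTTP monitoring endpoints that report per-channel (per-queue) message counts and subscriber state. A minimal sketch, assuming the NATS deployment in the openfaas namespace has its monitoring port (8222) enabled:

# Forward the NATS Streaming monitoring port to localhost.
kubectl port-forward -n openfaas deploy/nats 8222:8222 &
# List channels (queues) with message counts and per-subscriber pending state.
curl -s 'http://127.0.0.1:8222/streaming/channelsz?subs=1'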

@alexellis (Member)

What you need to do is to run something like --dry-run=client -o yaml - I covered this in my webinar on K8s 1.19 a couple of weeks ago.
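A minimal sketch of that pattern, rendering a manifest client-side instead of exporting a live object (the deployment and image names here are placeholders):

# Render a Deployment manifest locally; nothing is created in the cluster.
kubectl create deployment slow-queue-worker \
  --image=example/queue-worker:latest \
  --dry-run=client -o yaml > slow-queue-worker.yaml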

PRs to the docs are welcome, but please read the contributing guide and get your DCO right the first time if possible.

Open to suggestions on multiple queues; this is slightly less important now that we can have massive concurrency - 100+ goroutines all processing from the same single Pod. The export / apply approach to YAML is a temporary workaround.
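For context, that concurrency is configured on the queue-worker itself; with the openfaas helm chart it maps to a values override (a sketch, assuming the chart's queueWorker.maxInflight value, which defaults to 1):

# Let one queue-worker pod process up to 100 messages concurrently.
helm upgrade openfaas openfaas/openfaas \
  --namespace openfaas --reuse-values \
  --set queueWorker.maxInflight=100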

@alexellis (Member)

@kevin-lindsay-1 would you be open to sending a PR?

@kevin-lindsay-1 (Author) commented Oct 29, 2020

@alexellis I looked into this, and there really isn't a great alternative to --export for imperative duplication like this now, which I suppose is for the best. Realistically, I agree with you that multiple queues shouldn't be necessary long-term, but there are a few missing pieces in how single-queue works right now, mainly that ackWait is defined by the queue-worker rather than by the function itself, which seems to be one of the only things preventing the "one queue to rule them all" solution.

As an aside, does the max_inflight env var for of-watchdog define a limit per pod, or for the function as a whole? I assume the former, but I don't see any docs making that explicit, and I haven't had time to sit down and verify that detail.

Here's the script that I have right now, which cleans up the output yaml and creates a file. Please let me know if this works for you, and if so, I can make a PR.

export CORE_NS='openfaas'
export QUEUE_NAME='slow-queue'

# Dump the live queue-worker Deployment, strip server-managed and chart-managed
# fields, then rewrite the names and the NATS channel for the new queue.
kubectl get -n "$CORE_NS" deploy/queue-worker -o yaml \
  | yq delete - 'status' \
  | yq delete - 'metadata.annotations' \
  | yq delete - 'metadata.generation' \
  | yq delete - 'metadata.managedFields' \
  | yq delete - 'metadata.namespace' \
  | yq delete - 'metadata.labels."app.kubernetes.io/managed-by"' \
  | yq delete - 'metadata.labels.chart' \
  | yq delete - 'metadata.labels.heritage' \
  | yq delete - 'metadata.labels.release' \
  | yq delete - '**.metadata.managedFields' \
  | yq delete - 'metadata.uid' \
  | yq delete - '**.metadata.uid' \
  | yq delete - 'metadata.resourceVersion' \
  | yq delete - '**.metadata.resourceVersion' \
  | yq delete - 'metadata.selfLink' \
  | yq delete - '**.metadata.selfLink' \
  | yq delete - 'metadata.creationTimestamp' \
  | yq delete - '**.metadata.creationTimestamp' \
  | yq write - 'spec.selector.matchLabels.app' "$QUEUE_NAME-queue-worker" \
  | yq write - 'spec.template.metadata.labels.app' "$QUEUE_NAME-queue-worker" \
  | yq write - 'metadata.name' "$QUEUE_NAME-queue-worker" \
  | yq write - 'spec.template.spec.containers[0].name' "$QUEUE_NAME-queue-worker" \
  | yq write - 'spec.template.spec.containers[0].env.(name==faas_nats_channel).value' "$QUEUE_NAME" \
  > "$QUEUE_NAME-queue-worker.yaml"
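A short usage sketch for the generated file; the figlet function and the queue name are illustrative, and com.openfaas.queue is the annotation OpenFaaS uses to route async invocations to a named queue:

# Deploy the new queue-worker alongside the default one.
kubectl apply -n "$CORE_NS" -f "$QUEUE_NAME-queue-worker.yaml"

# Route a function's async calls to the new queue via annotation.
faas-cli store deploy figlet --annotation com.openfaas.queue=slow-queue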

@kevin-lindsay-1 (Author)

Closing this issue in favor of openfaas/nats-queue-worker#113, which would remove the need to imperatively fetch an existing queue worker in order to create a new one.

For others that may see this issue and would like to create their own queue worker, instead of exporting a live queue worker, I recommend simply making your own chart by looking at the queue worker in the openfaas chart: https://github.com/openfaas/faas-netes/blob/master/chart/openfaas/templates/queueworker-dep.yaml
