
Initial prowjob informer framework #8991

Merged
merged 1 commit into from
Aug 22, 2018

Conversation

krzyzacy
Member

@krzyzacy krzyzacy commented Aug 9, 2018

I've referenced this from https://github.com/trstringer/k8s-controller-custom-resource and am going to play around with it and fill in more logic.

Opening the PR early to see if this is the correct way of implementing an informer; I'm going to try to make it work with the gerrit cluster first :-)

/area prow
/assign
cc @stevekuznetsov @cjwagner
also cc potential users who want the report feature in gerrit: @yutongz @AishSundar

@k8s-ci-robot k8s-ci-robot added area/prow Issues or PRs related to prow do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Aug 9, 2018
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 9, 2018
@k8s-ci-robot k8s-ci-robot added the ¯\_(ツ)_/¯ label Aug 9, 2018
shouldSkipRetest: false,
err: testError,
action: github.PullRequestActionOpened,
shouldSkipRetest: false,
Member Author

/shrug
hack/update-gofmt doesn't match verify-gofmt anymore ?!

Member Author

nvm, was using my workstation go rather than test-infra go

@stevekuznetsov
Contributor

If you need another example of a controller using CRD informers: https://github.com/openshift/ci-vm-operator/

// retrieve our custom resource informer which was generated from
// the code generator and pass it the custom resource client, specifying
// we should be looking through all namespaces for listing and watching
informer := prowjobinformer.NewProwJobInformer(
Contributor

You should use factories for the informers so they can share caches
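
A minimal sketch of that, assuming the generated ProwJob clientset/informer packages follow the usual code-generator layout (the exact import paths and the Prow().V1() accessor names are assumptions):

import (
	"time"

	prowjobclientset "k8s.io/test-infra/prow/client/clientset/versioned"
	prowjobinformers "k8s.io/test-infra/prow/client/informers/externalversions"
)

func newProwJobInformer(client prowjobclientset.Interface) {
	// One shared factory per process: every controller asking for the ProwJob
	// informer gets the same underlying watch and cache.
	factory := prowjobinformers.NewSharedInformerFactory(client, 30*time.Second)
	informer := factory.Prow().V1().ProwJobs()

	_ = informer // register event handlers / read the lister from here
}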

Member Author

sure

queue.Add(key)
}
},
DeleteFunc: func(obj interface{}) {
Contributor

You can retrieve state from the tombstone if you need it
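
For reference, a sketch of the usual tombstone-unwrapping pattern in a DeleteFunc (prowv1 stands for the generated ProwJob API package; its import path is an assumption):

import (
	"github.com/sirupsen/logrus"
	"k8s.io/client-go/tools/cache"

	prowv1 "k8s.io/test-infra/prow/apis/prowjobs/v1" // assumed path
)

func onDelete(obj interface{}) {
	pj, ok := obj.(*prowv1.ProwJob)
	if !ok {
		// If the watch missed the final delete event, the informer hands us a
		// tombstone wrapping the last known state of the object.
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			logrus.Errorf("couldn't get object from tombstone: %+v", obj)
			return
		}
		pj, ok = tombstone.Obj.(*prowv1.ProwJob)
		if !ok {
			logrus.Errorf("tombstone contained unexpected object: %+v", tombstone.Obj)
			return
		}
	}
	logrus.Infof("ProwJob %s/%s deleted", pj.Namespace, pj.Name)
}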

Member Author

crier doesn't care when sinker deletes prowjobs

// processNextItem retrieves each queued item and takes the
// necessary handler action based on whether the item was
// created or deleted
func (c *Controller) processNextItem() bool {
Contributor

This seems a lot more complex than what @fejta and I have seen... where is it from? Why?

Member Author

:-) I basically copied the controller code from https://github.com/trstringer/k8s-controller-custom-resource/blob/master/controller.go#L72; I'm sure I can get rid of some of it.

)

// Handler interface contains the methods that are required
type Handler interface {
Contributor

You should be writing a level-driven system for a controller. We look at the current state of ProwJob.Status and the current state of the report in GitHub, and reconcile as necessary. Do not try to build an event-driven system, since you are not guaranteed to see every event.
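
For illustration, a rough sketch of that level-driven shape (getReported and report are hypothetical stand-ins for the reporter lib, and prowv1 is the generated ProwJob API package):

import prowv1 "k8s.io/test-infra/prow/apis/prowjobs/v1" // assumed path

// syncProwJob compares the observed report state against the ProwJob status and
// only acts on the difference, so missed events don't matter.
func syncProwJob(pj *prowv1.ProwJob,
	getReported func(*prowv1.ProwJob) (prowv1.ProwJobState, error),
	report func(*prowv1.ProwJob) error) error {

	reported, err := getReported(pj)
	if err != nil {
		return err
	}
	if reported == pj.Status.State {
		// Already converged; nothing to do for this sync.
		return nil
	}
	// Nudge toward the desired state. If this fails or the process dies, the
	// next relist/sync sees the same divergence and retries.
	return report(pj)
}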

Member Author

I think I'll only handle ObjectUpdated, compare whether it is a status change, and call into the reporter lib from there

Member Author

I'll take your example as I don't want to handle add/delete

Member Author

wait,

since you are not guaranteed to see every event

really? why? then this won't be useful?

Member Author

from https://github.com/kubernetes/sample-controller/blob/master/docs/controller-client-go.md - I think missing events will be handled by relisting from the list-watcher; crier should only care about the state changes and the reporter lib should handle the underlying logic. I'll verify the behavior.

Contributor

Yep, if you miss events you'll see the object on your next relist. I just mean (and this is what you said you were about to do in scrum) that you should reconcile current state with current spec. Don't try to digest changes into "this happened" events; just compare desired and actual state and nudge in the convergent direction.

@krzyzacy
Member Author

krzyzacy commented Aug 9, 2018

/cc @munnerz
I accidentally found that James actually wrote the sample... cc'ing to see if this is reasonable or can be simplified.

@krzyzacy
Member Author

So far this code runs, but it seems to relist all (or at least a lot of) prowjobs per cycle - can there be non-prow controllers that mangle prowjobs? Or I wonder if I can somehow filter on status changes only? (I guess I can do a comparison upon an object update sync?)

@BenTheElder
Member

It should relist the world on some interval to diff against the cache, in case it missed events.

Member
@munnerz munnerz left a comment

so far this code runs, but seems to relist all | a lot of prowjobs per cycle

So there will be an actual APIServer resync/re-list every resyncPeriod; however, the resyncPeriod nowadays is not really necessary to catch 'missed' events.

A bit of history: resyncPeriod was originally added just in case some events were missed when everything was switched to use local caches. There are, however, other mechanisms in place that allow informers to catch up with missed events, so nowadays we needn't rely on resyncPeriod to ensure we've observed changes to resources.

The latest advice I've heard is that the resyncPeriod is now largely only useful for syncing external information sources - i.e. if you are depending upon some external API that you can't 'watch' for changes, you can use the resyncPeriod to periodically trigger a poll of those external sources despite the AddFunc/UpdateFunc/DeleteFunc not being called.

For that reason, you can just set resyncPeriod to zero (i.e. never force a resync), if (and only if) the only things that would lead to a change/resync of your resource being required are watched (i.e. via the informer API).
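
Concretely, that could look something like this when constructing the factory (a sketch; prowjobinformers and prowJobClient are the assumed generated informer package and client from the surrounding setup):

// resyncPeriod of 0 disables the forced periodic resync entirely; everything
// crier cares about is watched via the informer, so nothing external is polled.
factory := prowjobinformers.NewSharedInformerFactory(prowJobClient, 0)

// If some un-watchable external source mattered, a non-zero period would be the
// way to poll it instead:
// factory := prowjobinformers.NewSharedInformerFactory(prowJobClient, 10*time.Minute)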

(I guess I can do a comparison upon an object update sync?)

You can do this; however, IIRC it is not advised, as your controller should not care about extra calls to Sync: it should compare the 'actual' state of the world with the ProwJob and determine whether action needs to be taken (so if the thing you care about has not changed, very little action should be taken).

Hope that helps! I'm not too familiar with the background on what this controller will do, so I may be a bit off base with some comments 😬

// getKubernetesClient retrieves the Kubernetes cluster
// client from within the cluster
func getKubernetesClient() (kubernetes.Interface, prowjobclientset.Interface) {
config, err := loadClusterConfig()
Member

clientcmd.BuildConfigFromFlags is the 'standard' for constructing configuration for CLI apps - it may be best to use that from the get-go 😄 https://github.com/kubernetes/sample-controller/blob/master/main.go#L46
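
A small sketch of that pattern, assuming the usual --master / --kubeconfig flags:

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// loadConfig builds a rest.Config from the flag values; with both values empty,
// BuildConfigFromFlags falls back to the in-cluster service account config, so
// the same binary works inside and outside the cluster.
func loadConfig(masterURL, kubeconfig string) (*rest.Config, error) {
	return clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
}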

Member Author

eh, do I still want to pass in kubeconfig and masterurl if it's already in cluster?

Contributor

No, in-cluster load should take care of it

Member Author

done - do we want to put getKubernetesClient under prow/kube?

// create a new queue so that when the informer gets a resource that is either
// a result of listing or watching, we can add an identifying key to the queue
// so that it can be handled in the handler
queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
Member

The default rate limiter, imo, is way over-optimistic. You can see the configuration for it here:
https://github.com/kubernetes/client-go/blob/master/util/workqueue/default_rate_limiters.go#L37-L45.

Notably, it is exponential with a base value of 5ms and a max wait period of 1000s. In my experience this is way over the top, and a base value of ~1s and a (reasonable for your resource) lower max is usually more effective/reasonable.
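
For example, something along these lines (a sketch of the suggested values, not necessarily what this PR should hard-code):

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// newQueue uses per-item exponential backoff starting at 1s and capped at 60s,
// instead of the 5ms-based default limiter.
func newQueue() workqueue.RateLimitingInterface {
	return workqueue.NewRateLimitingQueue(
		workqueue.NewItemExponentialFailureRateLimiter(1*time.Second, 60*time.Second),
	)
}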

Member Author

done I think, changed to 1s/60s for now and maybe gonna make it configurable

defer close(stopCh)

// run the controller loop to process items
go controller.Run(stopCh)
Member

Calling Start(stopCh) on the SharedInformerFactory before starting the actual controller is good practice.

The factory's WaitForCacheSync will then automatically call the HasSynced function for all informers that have been obtained via the factory.

Whether you choose to also keep the HasSynced call in the Controller isn't really an issue (HasSynced can be called more than once), but it at least helps ensure you've properly waited on HasSynced for all relevant informers if you do 😄
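
In sketch form (factory, controller and stopCh stand for whatever the surrounding setup uses):

// Start launches all informers obtained from the factory so far.
factory.Start(stopCh)

// WaitForCacheSync blocks until every one of those informers reports HasSynced.
for typ, synced := range factory.WaitForCacheSync(stopCh) {
	if !synced {
		logrus.Fatalf("failed to sync informer cache for %v", typ)
	}
}

go controller.Run(stopCh)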

Member Author

done

sigTerm := make(chan os.Signal, 1)
signal.Notify(sigTerm, syscall.SIGTERM)
signal.Notify(sigTerm, syscall.SIGINT)
<-sigTerm
Member

nit: With the current termination logic, if an item is half-way through being processed when the SIGTERM comes in, it will be aborted immediately (as the process will exit). Is this desired? If not, you will need to plumb some logic in to wait for each controller worker to exit before returning.

You can see an example of that here: https://github.com/munnerz/cert-manager/blob/2ca7d26db2d53ebbb0e5996d308b839f4137497c/pkg/controller/certificates/controller.go#L144-L154
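
A sketch of one way to drain workers before exiting (assuming the controller's Run returns once stopCh is closed and its in-flight item is done):

import (
	"os"
	"os/signal"
	"sync"
	"syscall"
)

func runUntilSignalled(run func(stopCh <-chan struct{})) {
	stopCh := make(chan struct{})
	var wg sync.WaitGroup

	wg.Add(1)
	go func() {
		defer wg.Done()
		run(stopCh)
	}()

	sigTerm := make(chan os.Signal, 1)
	signal.Notify(sigTerm, syscall.SIGTERM, syscall.SIGINT)
	<-sigTerm

	close(stopCh) // ask the controller to stop picking up new items...
	wg.Wait()     // ...and wait for the current item to finish before exiting
}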

Member

We'll want to wait until all handling is complete before terminating the process so that we don't fail to update a job status.

AddFunc: func(obj interface{}) {
key, _ := cache.MetaNamespaceKeyFunc(obj)
logrus.Infof("[NOOP] Add prowjob: %s", key)
},
Member

I'm confused why you don't add to the queue on adds?

When a controller first starts up, the AddFunc will be called for each item in the apiserver. It should not matter whether Add or Update is called, since our controller is level-based - we simply have to converge the actual state with the desired state (i.e. the spec).

Treating the AddFunc as a NoOp is a hint to me that something in the sync func isn't designed quite right, and may lead to weird behaviour further down the line.

My 2c on ResourceEventHandlerFuncs:

AddFunc and UpdateFunc would be better condensed into one and named "SomethingMightHaveChangedFunc".

DeleteFunc probably shouldn't exist either, as it's quite possible you miss a delete action (e.g. what if we upgrade our controller whilst a ProwJob is being deleted? What if our controller is {down-for-some-reason}?). Instead, finalizers etc. should be used if you have to do something on delete.
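
Roughly, that boils down to handlers like these (a sketch; informer and queue come from the surrounding setup):

informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	// Add and Update are treated identically: both just mean "this object might
	// have changed, reconcile it".
	AddFunc: func(obj interface{}) {
		if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
			queue.Add(key)
		}
	},
	UpdateFunc: func(_, newObj interface{}) {
		if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
			queue.Add(key)
		}
	},
	// No DeleteFunc: anything that must happen on delete belongs in a finalizer.
})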

Member Author

done

logrus.Info("Controller.Run: cache sync complete")

// run the runWorker method every second with a stop channel
wait.Until(c.runWorker, time.Second, stopCh)
Member

nit: just one worker? Is this desired? 😄

Member Author

:-) do we want to make this configurable?

Contributor

Yes
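
A sketch of what the configurable version could look like inside Run (numWorkers coming from a flag as discussed below; wait is k8s.io/apimachinery/pkg/util/wait):

for i := 0; i < numWorkers; i++ {
	// Each worker pulls items off the shared queue independently.
	go wait.Until(c.runWorker, time.Second, stopCh)
}
<-stopCh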

Member Author

done

// then we want to retry this particular queue key a certain
// number of times (5 here) before we forget the queue key
// and throw an error
item, exists, err := c.informer.Informer().GetIndexer().GetByKey(keyRaw)
Member

I've not seen GetByKey used in this context here, although it doesn't sound unreasonable.

FWIW, I usually just query the lister and then do a check for IsNotFound: https://github.com/munnerz/cert-manager/blob/2ca7d26db2d53ebbb0e5996d308b839f4137497c/pkg/controller/certificates/controller.go#L203-L212
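
A sketch of the lister-based lookup, written as an error-returning fragment for brevity (apierrors is k8s.io/apimachinery/pkg/api/errors; keyRaw is the queue key from the surrounding code):

namespace, name, err := cache.SplitMetaNamespaceKey(keyRaw)
if err != nil {
	return err
}
pj, err := c.informer.Lister().ProwJobs(namespace).Get(name)
if apierrors.IsNotFound(err) {
	// The ProwJob was deleted after it was enqueued; nothing left to report on.
	return nil
}
if err != nil {
	return err
}
// ... reconcile pj from here ...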

logrus.Errorf("Controller.processNextItem: Failed processing item with key %s with error %v, no more retries", key, err)
c.queue.Forget(key)
utilruntime.HandleError(err)
}
Member

Missing returns here?

Member Author

done

logrus.Infof("Controller.processNextItem: object created & updated detected: %s", keyRaw)
logrus.Infof("Will report to provider here, item : %v", item)
// TODO(krzyzacy): call report lib here
c.queue.Forget(key)
Member

The Forget call in both of these branches can be factored out of the if block, as it's called regardless 😄

} else {
logrus.Infof("Controller.processNextItem: object created & updated detected: %s", keyRaw)
logrus.Infof("Will report to provider here, item : %v", item)
// TODO(krzyzacy): call report lib here
Member

Took me a while to realise there is no Sync function implemented as part of this PR 😅 I guess that's a follow-up to make this easier to review 😄

@krzyzacy
Member Author

thanks @munnerz! So a little bit of background: currently plank handles prowjob status changes and reports to GitHub; we want to move that piece of logic out of plank so we can make reporting more generic, i.e. to gerrit or other git platforms as well.

I'll attach the original design doc as well: https://docs.google.com/document/d/1xqxPB15DmdUmXv__6A186LMS126PYav41ImyMv9tWHE/edit

@stevekuznetsov
Contributor

Stellar review @munnerz

// TODO(krzyzacy): move this to kube???
// getKubernetesClient retrieves the Kubernetes cluster
// client from within the cluster
func getKubernetesClient() (kubernetes.Interface, prowjobclientset.Interface) {
Member

You'll also need to ensure you are looking for ProwJobs in the right namespace:

// ProwJobNamespace is the namespace in the cluster that prow
// components will use for looking up ProwJobs. The namespace
// needs to exist and will not be created by prow.
// Defaults to "default".
ProwJobNamespace string `json:"prowjob_namespace,omitempty"`

Member Author

the pjClient can handle prowjobs in all namespaces, and the lister can get prowjobs by taking in the namespace:

c.informer.Lister().ProwJobs(namespace).Get(name)

@cjwagner
Member

I know this PR doesn't include reporting logic, but something we might want to think about now...

When this controller does a full resync over all existing ProwJobs like it does when starting up, we need to avoid using API tokens to list comments or set statuses on all PRs that have associated PJs. We should consider marking the PJ itself to indicate when reporting is up-to-date. Storing the PJ state that was last successfully reported in a CRD field or annotation could work.

@krzyzacy
Member Author

@cjwagner sounds like what I'm doing for gerrit? xD

@munnerz
Member

munnerz commented Aug 17, 2018

We should consider marking the PJ itself to indicate when reporting is up-to-date. Storing the PJ state that was last successfully reported in a CRD field or annotation could work.

+1 to this - I am currently fire-fighting with Let's Encrypt API usage because the current iteration of cert-manager does not cache/store as much info as it should locally in the resource's status block.

Design your controller to do this from the start and avoid massive pain 😄

@krzyzacy krzyzacy force-pushed the informer branch 3 times, most recently from 506607b to 58b377c Compare August 17, 2018 18:59
// NeedReport marks whether a prowjob needs to be reported upon
// finishing, so crier won't waste GitHub API tokens doing
// unnecessary queries.
NeedReport bool `json:"need_report,omitempty"`
Member

We need to report on a ProwJob at multiple points (when it switches from triggered to pending and when it switches from pending to complete) so it would be better to keep track of the last PJ state that we successfully reported instead of using a bool. If we use a bool we can't distinguish if NeedReport is false because we reported the triggered->pending change or because we reported the pending->complete change.

It is unclear to me if this should be a field in the PJ status or if it should be an annotation.

Member Author

We need to report on a ProwJob at multiple points

ack 💯

It is unclear to me if this should be a field in the PJ status or if it should be an annotation.

I'd like to make it first class, so we don't need to loop over annotations and find the key for each prowjob

Member Author

I found there's already pj.Spec.Report, so I made it pj.Status.PrevReportState instead.
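
For illustration, the reporting gate could then look roughly like this (a sketch; report is a hypothetical stand-in for the reporter lib, and persisting the status update is elided):

// Skip ProwJobs whose current state has already been reported, so a full
// relist/resync doesn't burn API tokens on jobs that are already up to date.
if pj.Status.PrevReportState == pj.Status.State {
	return nil
}
if err := report(pj); err != nil {
	return err
}
pj.Status.PrevReportState = pj.Status.State
// ... persist the status update back via the ProwJob client ...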

Member Author

one issue is we don't have a state for previously created prowjobs, so it will attempt to report all of them at the initial deployment

Member

pj.Spec.Report is not an indicator of whether a PJ has been reported or not. It is an indicator of whether the PJ reports at all (determined by value of skip_report). We can't just get rid of that field, pj.Status.PrevReportState should be an addition instead of a replacement.

With respect to handling previously created prowjobs, we could:

  • Write and run a script to update existing jobs.
  • Temporarily ignore reporting prowjobs created before a configurable time and set that time to be just before we deploy these changes.
  • Make Prow start marking the PrevReportState field now and wait a while before we merge this PR. (👎 that's slow and doesn't let you test things now)

Member Author

I got rid of my previous pj.Status.NeedReport, not pj.Spec.Report :-)

I think we can deploy this NO-OP crier, and it will start to mark things as reported without actually doing anything, and later on we can enable the report lib?

Member

Sure, that sounds like a good plan.

@krzyzacy
Member Author

/retest

@krzyzacy krzyzacy force-pushed the informer branch 4 times, most recently from 9160d12 to 8e20118 Compare August 21, 2018 22:12
@krzyzacy krzyzacy changed the title [WIP] initial prowjob informer framework Initial prowjob informer framework Aug 21, 2018
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 21, 2018
@krzyzacy
Member Author

Poked around a little bit; this is working as intended now, besides the issue I mentioned above that the 1st deployment will try to report old jobs - unless we guard on some non-empty default value?

Also not sure of the best way of testing this - I can probably write an integration test like https://github.com/kubernetes/sample-controller/blob/master/controller_test.go?

&workqueue.BucketRateLimiter{Limiter: rate.NewLimiter(rate.Limit(10), 100)},
))

// construct the Controller object which has all of the necessary components to
Contributor

Not sure we really need these comments? Are they from the sample code?

Member Author

cleaned up the comments from sample code, also cleaned up logs

// have completed existing items then shutdown
defer c.queue.ShutDown()

logrus.Info("Controller.Run: initiating")
Contributor

Let's make this more human? Without specific method names, etc

c.informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
logrus.Infof("Add prowjob: %s", key)
Contributor

Let's use a WithField on the key here for filtering
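
E.g. (a sketch):

logrus.WithField("prowjob", key).Info("Add prowjob")

// and on the error path, keep the key and the error as structured fields too:
logrus.WithField("prowjob", key).WithError(err).Error("failed to construct key for prowjob")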

c.queue.Add(key)
}
},
DeleteFunc: func(obj interface{}) {
Contributor

Why bother?

UpdateFunc: func(oldObj, newObj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(newObj)
logrus.Infof("Update prowjob: %s", key)
if err == nil {
Contributor

Log the error?


// runWorker executes the loop to process new items added to the queue
func (c *Controller) runWorker() {
logrus.Info("Controller.runWorker: starting")
Contributor

These seem a little superfluous and will spam very heavily at Info -- might want to just remove all of the alerts that aren't embedding data

}

if c.queue.NumRequeues(key) < 5 {
logrus.Errorf("Controller.processNextItem: Failed processing item with key %s with error %v, retrying", key, err)
Contributor

Can we use WithField on the key and WithError on the error so we can filter these?

return true
}

logrus.Infof("Controller.processNextItem: object created & updated detected: %s", pj.Spec.Job)
Contributor

Technically just WithFields(fields(pj)).Info("reconciling ProwJob")? Why we're processing is an implementation detail

@krzyzacy krzyzacy force-pushed the informer branch 5 times, most recently from 298bc91 to 77580bc Compare August 22, 2018 00:29
Contributor
@stevekuznetsov stevekuznetsov left a comment

/lgtm
/hold

)

const (
resync = 0 * time.Second
Contributor

Just noticed this - not even sure what 0 will do, but this should be on the order of minutes

Member Author

0 means it will not do a resync, per #8991 (review)

I can change the unit to minutes


func (o *options) Validate() error {
if o.numWorkers == 0 {
return errors.New("--num-workers must be greater than 0")
Contributor

set a sane default

Member Author

defaults to 1 now
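
E.g. something like this (a sketch; fs and o are the flag set and options struct from the surrounding code):

fs.IntVar(&o.numWorkers, "num-workers", 1, "Number of worker goroutines processing the prowjob queue.")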

return clusterConfig, nil
}

// TODO(krzyzacy): move this to kube???
Contributor

I'm about to rip out the old ProwJob client and when I do that, I will add this to the client utils for startup

Member Author

ok I'll punt to you :-)

@k8s-ci-robot k8s-ci-robot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. lgtm "Looks good to me", indicates that a PR is ready to be merged. labels Aug 22, 2018
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 22, 2018
@stevekuznetsov
Contributor

/lgtm
/hold cancel

@k8s-ci-robot k8s-ci-robot added lgtm "Looks good to me", indicates that a PR is ready to be merged. and removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Aug 22, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: krzyzacy, stevekuznetsov

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
