Initial prowjob informer framework #8991
Conversation
shouldSkipRetest: false,
err: testError,
action: github.PullRequestActionOpened,
shouldSkipRetest: false,
/shrug
hack/update-gofmt doesn't match verify-gofmt anymore ?!
nvm, was using my workstation go rather than test-infra go
If you need another example of a controller using CRD informers: https://github.com/openshift/ci-vm-operator/
prow/cmd/crier/main.go
Outdated
// retrieve our custom resource informer which was generated from
// the code generator and pass it the custom resource client, specifying
// we should be looking through all namespaces for listing and watching
informer := prowjobinformer.NewProwJobInformer(
You should use factories for the informers so they can share caches
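For illustration, a minimal sketch of the factory approach; the generated package paths and the Prow().V1().ProwJobs() accessor are assumptions based on typical code-generator output:

```go
import (
	"time"

	prowjobclientset "k8s.io/test-infra/prow/client/clientset/versioned"
	prowjobinformers "k8s.io/test-infra/prow/client/informers/externalversions"
	prowjobinfov1 "k8s.io/test-infra/prow/client/informers/externalversions/prowjobs/v1"
)

// newProwJobInformer builds one shared informer factory and returns the ProwJob
// informer obtained from it, so every consumer shares the same cache and watch.
func newProwJobInformer(client prowjobclientset.Interface, resync time.Duration) (prowjobinformers.SharedInformerFactory, prowjobinfov1.ProwJobInformer) {
	factory := prowjobinformers.NewSharedInformerFactory(client, resync)
	return factory, factory.Prow().V1().ProwJobs()
}
```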
sure
prow/cmd/crier/main.go
Outdated
queue.Add(key)
}
},
DeleteFunc: func(obj interface{}) {
You can retrieve state from the tombstone if you need it
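For reference, a hedged sketch of what tombstone handling inside cache.ResourceEventHandlerFuncs could look like if deletes ever mattered (type names assumed):

```go
DeleteFunc: func(obj interface{}) {
	pj, ok := obj.(*prowjobv1.ProwJob)
	if !ok {
		// The object may arrive wrapped in a tombstone if the watch missed
		// the actual delete event; recover the last known state from it.
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			return
		}
		if pj, ok = tombstone.Obj.(*prowjobv1.ProwJob); !ok {
			return
		}
	}
	logrus.Infof("prowjob %s/%s was deleted", pj.Namespace, pj.Name)
},
```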
crier doesn't care when sinker deletes prowjobs
// processNextItem retrieves each queued item and takes the
// necessary handler action based off of if the item was
// created or deleted
func (c *Controller) processNextItem() bool {
This seems a lot more complex than what @fejta and I have seen... where is it from? Why?
:-) I basically copied the controller code from https://github.com/trstringer/k8s-controller-custom-resource/blob/master/controller.go#L72, I'm sure I can get rid of something
prow/crier/handler.go
Outdated
)

// Handler interface contains the methods that are required
type Handler interface {
You should be writing a level-driven system for a controller. We look at the current state of the ProwJob.Status and the current state of the report in GitHub, and reconcile as necessary. Do not try to build an event-driven system, since you are not ensured to see every event.
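A hedged sketch of the level-driven shape being suggested (names like `reporter` and `PrevReportState` are illustrative, not this PR's final API):

```go
// syncProwJob reconciles the current ProwJob status against what has already
// been reported, rather than reacting to individual add/update/delete events.
func (c *Controller) syncProwJob(namespace, name string) error {
	pj, err := c.informer.Lister().ProwJobs(namespace).Get(name)
	if err != nil {
		return err
	}
	// Level-driven check: if the current state was already reported, do nothing.
	if pj.Status.PrevReportState == pj.Status.State {
		return nil
	}
	if err := c.reporter.Report(pj); err != nil {
		return err
	}
	// Persisting the newly reported state back to the ProwJob is elided here.
	return nil
}
```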
I think I'll only handle ObjectUpdated, compare if it is a status change, and call into the reporter lib from there
I'll take your example as I don't want to handle add/delete
wait,
since you are not ensured to see every event
really? why? then this won't be useful?
from https://github.com/kubernetes/sample-controller/blob/master/docs/controller-client-go.md - I think missing events will be handled by relisting from the list-watcher; crier should only care about the state changes, and the reporter lib should handle the underlying logic. I'll verify the behavior.
Yep, if you miss events you'll see the object on your next relist. I just mean (and this is what you said you were about to do in scrum) that you should reconcile current state with current spec. Don't try to digest into "this happened" events, just compare desired and actual state and nudge in the convergent direction
/cc @munnerz
so far this code runs, but seems to relist all | a lot of prowjobs per cycle - can there be non-prow controllers that mangle prowjobs? Or I wonder if I can somehow filter on status changes only? (I guess I can do a comparison upon an object update sync?)
It should relist the world on some interval to diff against the cache, in case it missed events.
so far this code runs, but seems to relist all | a lot of prowjobs per cycle
So there will be an actual APIServer resync/re-list every resyncPeriod; however, the resyncPeriod nowadays is not really necessary to catch 'missed' events.
A bit of history: resyncPeriod was originally added just in case some events were missed when everything was switched to use local caches. There are however other mechanisms in place that allow informers to catch up with missed events, and so nowadays we needn't rely on resyncPeriod to ensure we've observed changes to resources.
The latest advice I've heard is that the resyncPeriod is now largely only useful for syncing external information sources - i.e. if you are depending upon some external API that you can't 'watch' for changes, you can use the resyncPeriod to periodically poll/trigger a poll of these external sources despite the AddFunc/UpdateFunc/DeleteFunc not being called.
For that reason, you can just set resyncPeriod to zero (i.e. never force a resync), iff the only thing(s) that would lead to a change/resync of your resource being required are watched (i.e. via the informer api).
(I guess I can do a comparison upon an object update sync?)
You can do this, however IIRC it is not advised, as your controller should not care about extra calls to Sync: it should compare the 'actual' state of the world with the ProwJob and determine whether action needs to be taken (so if the thing you care about has not changed, very little action should be taken).
Hope that helps! I'm not too familiar with the background on what this controller will do, so I may be a bit off base with some comments 😬
prow/cmd/crier/main.go
Outdated
// getKubernetesClient retrieves the Kubernetes cluster | ||
// client from within the cluster | ||
func getKubernetesClient() (kubernetes.Interface, prowjobclientset.Interface) { | ||
config, err := loadClusterConfig() |
clientcmd.BuildConfigFromFlags is the 'standard' for constructing configuration for CLI apps - may be best to use that from the get-go 😄 https://github.com/kubernetes/sample-controller/blob/master/main.go#L46
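A minimal sketch of that pattern; when both flags are empty, BuildConfigFromFlags falls back to the in-cluster config on its own:

```go
import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildConfig loads a rest.Config from --masterurl/--kubeconfig if given;
// otherwise clientcmd falls back to in-cluster configuration.
func buildConfig(masterURL, kubeconfig string) (*rest.Config, error) {
	return clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
}
```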
eh, do I still want to pass in kubeconfig and masterurl if it's already in cluster?
No, in-cluster load should take care of it
done - do we want to put getKubernetesClient under prow/kube?
prow/cmd/crier/main.go
Outdated
// create a new queue so that when the informer gets a resource that is either
// a result of listing or watching, we can add an identifying key to the queue
// so that it can be handled in the handler
queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
The default rate limiter, imo, is way over-optimistic. You can see the configuration for it here:
https://github.com/kubernetes/client-go/blob/master/util/workqueue/default_rate_limiters.go#L37-L45.
Notably, it is exponential with a base value of 5ms and a max wait period of 1000s. In my experience this is way over the top, and a base value of ~1s and a (reasonable for your resource) lower max is usually more effective/reasonable.
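A sketch of a gentler per-item limiter, roughly the 1s base / 60s cap the author settles on below (exact values are a judgment call):

```go
// exponential backoff per item: 1s, 2s, 4s, ... capped at 60s
queue := workqueue.NewRateLimitingQueue(
	workqueue.NewItemExponentialFailureRateLimiter(1*time.Second, 60*time.Second),
)
```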
done I think, changed to 1s/60s for now and maybe gonna make it configurable
prow/cmd/crier/main.go
Outdated
defer close(stopCh)

// run the controller loop to process items
go controller.Run(stopCh)
Calling Start(stopCh) on the SharedInformerFactory before starting the actual controller is good practice.
It will call the HasSynced function for all Informers that have been obtained via the factory automatically.
Whether you choose to then also keep the HasSynced function call on the Controller as well isn't really an issue (HasSynced can be called >1 time). But it at least helps to ensure you've properly called the HasSynced funcs for all relevant informers if you do 😄
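A sketch of that ordering in main, assuming a factory-based informer as above:

```go
// Start the shared informers, wait for their caches to fill, then run the
// controller workers against the warm cache.
factory.Start(stopCh)
for t, ok := range factory.WaitForCacheSync(stopCh) {
	if !ok {
		logrus.Fatalf("cache failed to sync for %v", t)
	}
}
go controller.Run(stopCh)
```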
done
prow/cmd/crier/main.go
Outdated
sigTerm := make(chan os.Signal, 1)
signal.Notify(sigTerm, syscall.SIGTERM)
signal.Notify(sigTerm, syscall.SIGINT)
<-sigTerm
nit: With the current termination logic, if an item is half-way through being processed when the SIGTERM comes in, it will be aborted immediately (as the process will exit). Is this desired? If not, you will need to plumb some logic in to wait for each controller worker to exit before returning.
You can see an example of that here: https://github.com/munnerz/cert-manager/blob/2ca7d26db2d53ebbb0e5996d308b839f4137497c/pkg/controller/certificates/controller.go#L144-L154
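A hedged sketch of one way to drain before exiting, assuming the controller's Run only returns once its workers have finished (names illustrative):

```go
stopCh := make(chan struct{})

var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	controller.Run(stopCh) // Run returns only after its workers have drained
}()

sigTerm := make(chan os.Signal, 1)
signal.Notify(sigTerm, syscall.SIGTERM, syscall.SIGINT)
<-sigTerm

close(stopCh) // ask the controller to stop picking up new items
wg.Wait()     // block until in-flight items are fully processed
```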
We'll want to wait until all handling is complete before terminating the process so that we don't fail to update a job status.
prow/crier/controller.go
Outdated
AddFunc: func(obj interface{}) {
	key, _ := cache.MetaNamespaceKeyFunc(obj)
	logrus.Infof("[NOOP] Add prowjob: %s", key)
},
I'm confused why you don't add to the queue on adds?
When a controller first starts up, the AddFunc will be called for each item in the apiserver. It should not matter whether Add or Update is called, as our controller is level based, meaning it should not matter whether the item is added or updated - we simply have to converge the actual state with the desired state (i.e. the spec).
Treating the AddFunc as a NoOp is a hint to me that something in the sync func isn't designed quite right, and may lead to weird behaviour further down the line.
My 2c on ResourceEventHandlerFuncs:
AddFunc and UpdateFunc would be better condensed into one and named "SomethingMightHaveChangedFunc".
DeleteFunc probably shouldn't exist either, as it's quite possible you miss a delete action (i.e. what if we upgrade our controller whilst a ProwJob is being deleted? What if our controller is {down-for-some-reason}). Instead, finalizers etc should be used if you have to do something on delete.
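A sketch of the condensed handler: no DeleteFunc, and both paths just enqueue the key for the level-based sync (variable names illustrative):

```go
enqueue := func(obj interface{}) {
	key, err := cache.MetaNamespaceKeyFunc(obj)
	if err != nil {
		logrus.WithError(err).Error("cannot derive cache key for object")
		return
	}
	queue.Add(key)
}

informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc:    enqueue,
	UpdateFunc: func(oldObj, newObj interface{}) { enqueue(newObj) },
	// no DeleteFunc: deletions are either irrelevant or handled via finalizers
})
```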
done
prow/crier/controller.go
Outdated
logrus.Info("Controller.Run: cache sync complete") | ||
|
||
// run the runWorker method every second with a stop channel | ||
wait.Until(c.runWorker, time.Second, stopCh) |
nit: just one worker? Is this desired? 😄
:-) do we want to make this configurable?
Yes
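e.g., a short sketch of starting a configurable number of workers off the same queue (flag name illustrative):

```go
for i := 0; i < o.numWorkers; i++ {
	// every worker pulls from the same rate-limited queue
	go wait.Until(c.runWorker, time.Second, stopCh)
}
```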
done
prow/crier/controller.go
Outdated
// then we want to retry this particular queue key a certain
// number of times (5 here) before we forget the queue key
// and throw an error
item, exists, err := c.informer.Informer().GetIndexer().GetByKey(keyRaw)
I've not seen GetByKey used in this context here, although it doesn't sound unreasonable.
FWIW, I usually just query the lister and then do a check for IsNotFound: https://github.com/munnerz/cert-manager/blob/2ca7d26db2d53ebbb0e5996d308b839f4137497c/pkg/controller/certificates/controller.go#L203-L212
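The lister form of that lookup would look roughly like this (error-package alias assumed to be k8s.io/apimachinery/pkg/api/errors):

```go
pj, err := c.informer.Lister().ProwJobs(namespace).Get(name)
if apierrors.IsNotFound(err) {
	// the ProwJob was deleted after the key was queued; nothing left to do
	return nil
}
if err != nil {
	return fmt.Errorf("failed to get ProwJob %s/%s: %v", namespace, name, err)
}
```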
prow/crier/controller.go
Outdated
logrus.Errorf("Controller.processNextItem: Failed processing item with key %s with error %v, no more retries", key, err) | ||
c.queue.Forget(key) | ||
utilruntime.HandleError(err) | ||
} |
Missing returns here?
done
prow/crier/controller.go
Outdated
logrus.Infof("Controller.processNextItem: object created & updated detected: %s", keyRaw) | ||
logrus.Infof("Will report to provider here, item : %v", item) | ||
// TODO(krzyzacy): call report lib here | ||
c.queue.Forget(key) |
Forget in both of these braces can be factored out of the if block, as it's called regardless 😄
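A sketch of the resulting control flow, with the early returns and a single Forget outside the branch (retry count illustrative):

```go
if err != nil {
	if c.queue.NumRequeues(key) < 5 {
		// transient failure: requeue with backoff; NumRequeues keeps the count
		c.queue.AddRateLimited(key)
		return true
	}
	// retry budget exhausted: surface the error and fall through to Forget
	utilruntime.HandleError(err)
}
c.queue.Forget(key)
return true
```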
prow/crier/controller.go
Outdated
} else {
	logrus.Infof("Controller.processNextItem: object created & updated detected: %s", keyRaw)
	logrus.Infof("Will report to provider here, item : %v", item)
	// TODO(krzyzacy): call report lib here
Took me a while to realise there is no Sync function implemented as part of this PR 😅 I guess that's a follow-up to make this easier to review 😄
thanks @munnerz ! So a little bit of background: currently plank handles prowjob status changes and reports to github, and we want to move that piece of logic out of plank so we can make reporting more generic, e.g. reporting to gerrit or other git platforms as well. I'll attach the original design doc as well: https://docs.google.com/document/d/1xqxPB15DmdUmXv__6A186LMS126PYav41ImyMv9tWHE/edit
Stellar review @munnerz
prow/cmd/crier/main.go
Outdated
// TODO(krzyzacy): move this to kube???
// getKubernetesClient retrieves the Kubernetes cluster
// client from within the cluster
func getKubernetesClient() (kubernetes.Interface, prowjobclientset.Interface) {
You'll also need to ensure you are looking for ProwJobs in the right namespace:
test-infra/prow/config/config.go
Lines 78 to 82 in fa113fe
// ProwJobNamespace is the namespace in the cluster that prow
// components will use for looking up ProwJobs. The namespace
// needs to exist and will not be created by prow.
// Defaults to "default".
ProwJobNamespace string `json:"prowjob_namespace,omitempty"`
the pjClient can handle prowjobs in all namespaces, and the lister can get prowjobs by taking in the namespace: c.informer.Lister().ProwJobs(namespace).Get(name)
I know this PR doesn't include reporting logic, but something we might want to think about now... When this controller does a full resync over all existing ProwJobs like it does when starting up, we need to avoid using API tokens to list comments or set statuses on all PRs that have associated PJs. We should consider marking the PJ itself to indicate when reporting is up-to-date. Storing the PJ state that was last successfully reported in a CRD field or annotation could work.
@cjwagner sounds like what I'm doing for gerrit? xD
+1 to this - I am currently fire-fighting with Let's Encrypt API usage because the current iteration of cert-manager does not cache/store as much info as it should locally in the resources. Design your controller to do this from the start and avoid massive pain 😄
Force-pushed from 506607b to 58b377c
prow/apis/prowjobs/v1/types.go
Outdated
// NeedReport marks if a prowjob needs to be reported upon
// finished, so crier won't waste github api tokens to do
// unnecessary queries.
NeedReport bool `json:"need_report, omitempty"`
We need to report on a ProwJob at multiple points (when it switches from triggered to pending and when it switches from pending to complete) so it would be better to keep track of the last PJ state that we successfully reported instead of using a bool. If we use a bool we can't distinguish if NeedReport is false because we reported the triggered->pending change or because we reported the pending->complete change.
It is unclear to me if this should be a field in the PJ status or if it should be an annotation.
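A sketch of the state-tracking field the thread converges on below; the placement inside the status struct and the exact JSON tag are illustrative:

```go
type ProwJobStatus struct {
	// ... existing status fields ...

	// PrevReportState records the last ProwJob state that was successfully
	// reported, so crier can skip jobs whose current state already matches
	// and avoid burning GitHub API tokens on a full resync.
	PrevReportState ProwJobState `json:"prev_report_state,omitempty"`
}
```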
We need to report on a ProwJob at multiple points
ack 💯
It is unclear to me if this should be a field in the PJ status or if it should be an annotation.
I'd like to make it first class, so we don't need to loop over annotations and find the key for each prowjob
I found there's already pj.Spec.Report, I made it pj.Status.PrevReportState instead.
one issue is we don't have a state for previously created prowjobs, and it will attempt to report all of them at the initial deployment
pj.Spec.Report is not an indicator of whether a PJ has been reported or not. It is an indicator of whether the PJ reports at all (determined by the value of skip_report). We can't just get rid of that field; pj.Status.PrevReportState should be an addition instead of a replacement.
With respect to handling previously created prowjobs, we could:
- Write and run a script to update existing jobs.
- Temporarily ignore reporting prowjobs created before a configurable time and set that time to be just before we deploy these changes.
- Make Prow start marking the PrevReportState field now and wait a while before we merge this PR. (👎 that's slow and doesn't let you test things now)
I got rid of my previous pj.Status.NeedReport, not pj.Spec.Report :-)
I think we can deploy this NO-OP crier, and it will start to mark things as reported but not do anything, and later on we can enable the report lib?
Sure, that sounds like a good plan.
/retest
Force-pushed from 9160d12 to 8e20118
Poked around a little bit, this is WAI now, besides the issue I mentioned above that the 1st deployment will try to report old jobs, unless we guard on some non-empty default value? Also not sure of the best way to test this - I probably can write an integration test like https://github.com/kubernetes/sample-controller/blob/master/controller_test.go?
prow/cmd/crier/main.go
Outdated
&workqueue.BucketRateLimiter{Limiter: rate.NewLimiter(rate.Limit(10), 100)},
))

// construct the Controller object which has all of the necessary components to
Not sure we really need these comments? Are they from the sample code?
cleaned up the comments from sample code, also cleaned up logs
prow/crier/controller.go
Outdated
// have completed existing items then shutdown
defer c.queue.ShutDown()

logrus.Info("Controller.Run: initiating")
Let's make this more human? Without specific method names, etc
prow/crier/controller.go
Outdated
c.informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		key, err := cache.MetaNamespaceKeyFunc(obj)
		logrus.Infof("Add prowjob: %s", key)
Let's use a WithField on the key here for filtering
prow/crier/controller.go
Outdated
c.queue.Add(key)
}
},
DeleteFunc: func(obj interface{}) {
Why bother?
prow/crier/controller.go
Outdated
UpdateFunc: func(oldObj, newObj interface{}) {
	key, err := cache.MetaNamespaceKeyFunc(newObj)
	logrus.Infof("Update prowjob: %s", key)
	if err == nil {
Log the error?
prow/crier/controller.go
Outdated
// runWorker executes the loop to process new items added to the queue
func (c *Controller) runWorker() {
	logrus.Info("Controller.runWorker: starting")
These seem a little superfluous and will spam very heavily at Info -- might want to just remove all of the alerts that aren't embedding data
prow/crier/controller.go
Outdated
}

if c.queue.NumRequeues(key) < 5 {
	logrus.Errorf("Controller.processNextItem: Failed processing item with key %s with error %v, retrying", key, err)
Can we use WithField on the key and WithError on the error so we can filter these?
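i.e., something like (message text illustrative):

```go
logrus.WithField("key", key).WithError(err).Error("failed to process item, will retry")
```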
prow/crier/controller.go
Outdated
return true
}

logrus.Infof("Controller.processNextItem: object created & updated detected: %s", pj.Spec.Job)
Technically just WithFields(fields(pj)).Info("reconciling ProwJob")? Why we're processing is an implementation detail
Force-pushed from 298bc91 to 77580bc
/lgtm
/hold
prow/cmd/crier/main.go
Outdated
)

const (
	resync = 0 * time.Second
Just noticed this, not even sure what 0 will do, but this should be on the order of minutes
0 means it will not do resync, per #8991 (review)
I can change the unit to minutes
func (o *options) Validate() error {
	if o.numWorkers == 0 {
		return errors.New("--num-workers must be greater than 0")
set a sane default
default to 1 now
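e.g., a sketch of registering the flag with a default of 1 (flag-set variable name illustrative):

```go
fs.IntVar(&o.numWorkers, "num-workers", 1, "Number of goroutines processing the work queue.")
```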
return clusterConfig, nil
}

// TODO(krzyzacy): move this to kube???
I'm about to rip out the old ProwJob client and when I do that, I will add this to the client utils for startup
ok I'll punt to you :-)
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: krzyzacy, stevekuznetsov. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
I've referenced this from https://github.com/trstringer/k8s-controller-custom-resource and am gonna play around with this and fill in more logic.
Opening the PR early to see if this is the correct way of implementing an informer, and gonna try to make it work with the gerrit cluster first :-)
/area prow
/assign
cc @stevekuznetsov @cjwagner
also cc potential users who want the report feature in gerrit: @yutongz @AishSundar