
[Backport][Release-v0.22.0] Support eventing metrics #234

Conversation

@skonto commented Jun 10, 2021

Backport of knative-extensions#688.
To be completely aligned, I opened openshift/knative-eventing#1311.
My goal is to add visualizations for channel metrics on the S-O side.
/cc @aliok @matzew

I am also planning to backport this for 0.23. I didn't go upstream because, afaik, we depend on the release tag there.

skonto added 2 commits June 10, 2021 13:12
* support eventing metrics

* lint

* imports

* update with latest deps
@openshift-ci openshift-ci bot requested review from aliok and markusthoemmes June 10, 2021 10:31
@skonto skonto requested review from matzew and removed request for markusthoemmes June 10, 2021 10:31
@skonto commented Jun 11, 2021

@matzew could you approve this one too?

The review discussion below is on this hunk:

	te := kncloudevents.TypeExtractorTransformer("")
	bufferedMessage, err := buffering.CopyMessage(ctx, message, &te)
	_, err := c.dispatcher.DispatchMessageWithRetries(

@slinkydeveloper commented:

Why are you doing this? You also don't free that memory by invoking Message.Finish, so this is going to leak memory.


@skonto (Author) Jun 11, 2021

My assumption is that bufferedMessage is managed internally. DispatchMessageWithRetries has:

	defer func() {
		for _, msg := range messagesToFinish {
			_ = msg.Finish(nil)
		}
	}()

We discussed this in the past and this is just a copy of that idea: transform the msg to get the type. If there is a better way (with no memory overhead), I will adapt (also upstream).

@slinkydeveloper replied:

Nope, please check the godocs for that method; it explains how it works.

@skonto (Author) Jun 11, 2021

@slinkydeveloper Why not? The buffered msg is finished, afaik. Do you have a pointer? Which method?

type Message interface {
	MessageReader

	// Finish *must* be called when message from a Receiver can be forgotten by
	// the receiver. A QoS 1 sender should not call Finish() until it gets an acknowledgment of
	// receipt on the underlying transport.  For QoS 2 see ExactlyOnceMessage.
	//
	// Note that, depending on the Message implementation, forgetting to Finish the message
	// could produce memory/resources leaks!
	//
	// Passing a non-nil err indicates sending or processing failed.
	// A non-nil return indicates that the message was not accepted
	// by the receivers peer.
	Finish(error) error
}

For CopyMessage, the godoc says:

// CopyMessage reads m once and creates an in-memory copy depending on the encoding of m.
// The returned copy is not dependent on any transport and can be visited many times.
// When the copy can be forgot, the copied message must be finished with Finish() message to release the memory.
// transformers can be nil and this function guarantees that they are invoked only once during the encoding process.

We discussed this in the past and this is the API you mentioned to use. What else should I do here? Do you have an example? All msgs are finished, since DispatchWithRetries does that for the buffered one, no?
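For reference, a minimal standalone sketch of the Finish contract, written against the CloudEvents sdk-go v2 API as I understand it (illustrative only, not code from this PR): copy the message, read the copy, and finish the copy so its buffers are released.

	package main

	import (
		"context"
		"fmt"
		"log"

		cloudevents "github.com/cloudevents/sdk-go/v2"
		"github.com/cloudevents/sdk-go/v2/binding"
		"github.com/cloudevents/sdk-go/v2/binding/buffering"
	)

	func main() {
		event := cloudevents.NewEvent()
		event.SetID("1")
		event.SetSource("example/source")
		event.SetType("dev.example.demo")

		msg := binding.ToMessage(&event)

		// Read msg once and create an in-memory copy.
		copied, err := buffering.CopyMessage(context.Background(), msg)
		if err != nil {
			log.Fatal(err)
		}
		// Per the godoc quoted above: the copy owns buffered memory, so it
		// must be finished once it can be forgotten, or the buffers leak.
		defer func() { _ = copied.Finish(nil) }()

		// The copy is transport-independent and can be visited many times,
		// e.g. to read the event type.
		e, err := binding.ToEvent(context.Background(), copied)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("type from buffered copy:", e.Type())
	}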

@slinkydeveloper replied:

Ok, I think you're right, this seems correct; sorry for blocking this. Although I would love to explore whether there's any way we can avoid this buffering...

@skonto (Author) Jun 14, 2021

@slinkydeveloper np, yeah, copying is not something I want to do either, especially on this path. One solution could be to leak the event type into the Kafka msg header (a bit of a hack). The other option would be to get back some useful info when we write the HTTP request, since that is the last time we touch the msg.
executeRequest has a transformers parameter we never use. That method is called by DispatchWithRetries. When we write the request, toEvent is called anyway, and that method applies the transformers. I think it is more efficient to pass the transformer there.
In detail, the call here will eventually call http.WriteRequest, which calls binding.Write, which then calls this Write. Finally, ToEvent is called and the msg is transformed.
I think it should work if I either expand DispatchWithRetries or add a new method so I don't break dependent projects. WDYT?
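A minimal standalone sketch of the transformer-on-the-write-path idea, using the sdk-go v2 API directly (typeRecorder below is a hypothetical stand-in for kncloudevents.TypeExtractorTransformer; none of this is the PR's actual code). The transformer observes the event type during the single write pass, so no buffered copy is needed.

	package main

	import (
		"context"
		"fmt"
		"log"
		nethttp "net/http"

		cloudevents "github.com/cloudevents/sdk-go/v2"
		"github.com/cloudevents/sdk-go/v2/binding"
		"github.com/cloudevents/sdk-go/v2/binding/spec"
		cehttp "github.com/cloudevents/sdk-go/v2/protocol/http"
	)

	// typeRecorder captures the CloudEvents "type" attribute while the
	// message is being written; a stand-in for TypeExtractorTransformer.
	type typeRecorder struct{ eventType string }

	func (t *typeRecorder) Transform(r binding.MessageMetadataReader, _ binding.MessageMetadataWriter) error {
		if _, v := r.GetAttribute(spec.Type); v != nil {
			t.eventType, _ = v.(string)
		}
		return nil
	}

	func main() {
		event := cloudevents.NewEvent()
		event.SetID("1")
		event.SetSource("example/source")
		event.SetType("dev.example.demo")

		msg := binding.ToMessage(&event)
		defer func() { _ = msg.Finish(nil) }()

		req, err := nethttp.NewRequest("POST", "http://sink.example.invalid/", nil)
		if err != nil {
			log.Fatal(err)
		}

		// The transformer runs while the request is written; no copy needed.
		rec := &typeRecorder{}
		if err := cehttp.WriteRequest(context.Background(), msg, req, rec); err != nil {
			log.Fatal(err)
		}
		fmt.Println("extracted event type:", rec.eventType)
	}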

@slinkydeveloper replied:

Yeah, maybe you can just pass that transformer to executeRequest then?

@skonto (Author) replied:

Let me try it and see if it works.

@skonto (Author) Jun 14, 2021

@slinkydeveloper It does work as expected, so I will do the update upstream and also here. :)
I updated this PR.

@skonto commented Jun 14, 2021

/hold
I need to update upstream main and downstream eventing too first.

@skonto commented Jun 14, 2021

@slinkydeveloper One thing though that may not work: if reply is nil and destination is not defined, then an error is returned and no request is written (other such cases exist too). I don't think all cases are covered if we rely on the request being written, but it may be a good-enough middle-ground solution; I am still thinking about it. @matzew wdyt? I don't see many options here: either we keep the copy approach or we have incomplete data for some errors. Even adding the field to the Kafka headers requires a copy.
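To illustrate the gap with a toy sketch (dispatch and onWrite are made-up names, not the dispatcher's real code): when neither destination nor reply is set, dispatch fails before any request is written, so a write-path transformer never runs for that event and the metric misses its type.

	package main

	import (
		"errors"
		"fmt"
	)

	// dispatch is a toy stand-in for the real dispatch logic: with neither
	// destination nor reply, it errors out before "writing" a request, so
	// the write-path hook (where the transformer would run) never fires.
	func dispatch(destination, reply string, onWrite func()) error {
		if destination == "" && reply == "" {
			return errors.New("unable to dispatch: neither destination nor reply set")
		}
		onWrite() // the transformer would observe the event type here
		return nil
	}

	func main() {
		transformerRan := false
		err := dispatch("", "", func() { transformerRan = true })
		fmt.Println("err:", err, "| transformer ran:", transformerRan)
	}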

@skonto commented Jun 15, 2021

@slinkydeveloper gentle ping.

@skonto commented Jun 15, 2021

/unhold

@skonto commented Jun 15, 2021

/assign @slinkydeveloper

@skonto commented Jun 15, 2021

@slinkydeveloper should we get this in (gentle ping)?

@slinkydeveloper commented Jun 15, 2021
/lgtm
/approve

@openshift-ci openshift-ci bot added the lgtm label Jun 15, 2021
@openshift-ci bot commented Jun 15, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: skonto, slinkydeveloper

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit 805c235 into openshift-knative:release-v0.22.0 Jun 15, 2021
matzew pushed a commit that referenced this pull request Jun 22, 2021
* patch vendor

* Support eventing metrics (knative-extensions#688)

* support eventing metrics

* lint

* imports

* update with latest deps

* use transformers in dispatchWithRetries instead of copying

* fix use of transformers

* updates

* pass transformers to executeRequest