
Eventlistener expose information about cluster #1070

Closed
chmouel opened this issue Apr 26, 2021 · 6 comments

Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@chmouel (Member) commented Apr 26, 2021

Expected Behavior

The namespace where the EventListener is running should not be shown in the response.

Actual Behavior

{"eventListener":"openshift-pipelines-ascode-interceptor","namespace":"pipelines","eventID":"773414d6-f8e7-4b4e-a3fb-f99d7758cfa6"}

Additional Info

It's usually bad security practice to expose cluster-internal information to the public. Since EventListeners are usually exposed to the internet for GitHub or other VCS webhooks, it may be better to hide this information.

@chmouel chmouel added the kind/bug Categorizes issue or PR as related to a bug. label Apr 26, 2021
@afrittoli (Member) commented
Nice catch. The eventListener part might also be considered sensitive, as it implies the service name.
We could perhaps make that information available as opt-in via the service configuration?
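A hypothetical sketch of what such an opt-in might look like on the EventListener resource. The `responseVerbosity` field does not exist in Tekton Triggers; the name and values are illustrative only, to make the proposal concrete:

```yaml
# HYPOTHETICAL: "responseVerbosity" is not a real Tekton Triggers field.
# It sketches afrittoli's opt-in idea, with a safe default of "minimal".
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: openshift-pipelines-ascode-interceptor
spec:
  # "minimal": reply with eventID only (default)
  # "full":    also include eventListener name and namespace
  responseVerbosity: minimal
```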

@afrittoli afrittoli changed the title Eventlistenner expose information about cluster Eventlistener expose information about cluster Apr 26, 2021
@dibyom (Member) commented Apr 27, 2021

It's usually bad security practice to expose cluster-internal information to the public. Since EventListeners are usually exposed to the internet for GitHub or other VCS webhooks, it may be better to hide this information.

Agreed. Is there a particular risk that exposing the namespace/EL name opens us up to?
The trade-off here is ease of debugging. Having the EL name and the namespace along with the eventID gives one all the information required to look up the logs for a particular event. Without those two, the user would have to manually figure out which EventListener/namespace the webhook is connected to.

@tekton-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot tekton-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 15, 2021
@tekton-robot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot tekton-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 14, 2021
@tekton-robot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

5 participants