
Fix infinite failure on Kubernetes watch #6504

Merged 1 commit into elastic:master on Mar 12, 2018

Conversation

@vjsamuel (Contributor) commented Mar 7, 2018

This PR fixes #6503

How to reproduce: Run filebeat pointing to minikube.

```
minikube ssh
sudo su

ps aux | grep localkube
kill -9 process_id
```
This forces a failure on the API server, and when the API server comes back up it can no longer serve the last resource version that we had requested, so the watch fails with:

```
type:"ERROR" object:<raw:"k8s\000\n\014\n\002v1\022\006Status\022C\n\004\n\000\022\000\022\007Failure\032)too old resource version: 310742 (310895)\"\004Gone0\232\003\032\000\"\000" >  typeMeta:<apiVersion:"v1" kind:"Status" > raw:"\n\004\n\000\022\000\022\007Failure\032)too old resource version: 310742 (310895)\"\004Gone0\232\003" contentEncoding:"" contentType:""  <nil>
```

In such scenarios the only mitigation is to move the resource version to the latest one; scenarios like this would normally be handled by client-go. The reason the code fails with an error is that we pass a Pod resource to watcher.Next(), but in this scenario the object that comes back is an Error resource, so the protobuf unmarshalling fails. This is a limitation of the client we use, since the resource type has to be passed explicitly.

This fix is not the best in the world, as it might miss a few state changes.
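For reference, a minimal, self-contained sketch of the recovery path described above, assuming a hypothetical watcher type; the names (podWatcher, next, lastResourceVersion) are stand-ins for illustration, not the actual libbeat code:

```go
package main

import (
	"errors"
	"io"
	"log"
)

// podWatcher is a stand-in for the real Kubernetes watcher (assumed names).
type podWatcher struct {
	lastResourceVersion string
}

// next simulates watcher.Next() decoding the next watch event into a Pod.
// After a "410 Gone: too old resource version", the server sends a
// Status/Error object instead of a Pod, so the protobuf unmarshalling fails.
func (w *podWatcher) next() error {
	return errors.New("proto: unexpected wire type") // simulated decode failure
}

func (w *podWatcher) watch() {
	for {
		err := w.next()
		if err == nil {
			continue // handle the decoded Pod event
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return // stream closed; reconnect with the same resource version
		}
		// Any other error: forget the stored resource version so the next watch
		// starts from the latest state instead of failing forever on a stale one.
		log.Println("kubernetes: Ignoring event, moving to most recent resource version")
		w.lastResourceVersion = ""
		return
	}
}

func main() {
	w := &podWatcher{lastResourceVersion: "310742"}
	w.watch()
	log.Printf("resource version after recovery: %q", w.lastResourceVersion)
}
```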

@elasticmachine (Collaborator) commented:

Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?

@vjsamuel (Contributor, Author) commented Mar 7, 2018

@exekias I stole the binary back-off method from your attempt to fix this. Apologies for that :)

```go
if !(err == io.EOF || err == io.ErrUnexpectedEOF) {
	// This is an error event which can be recovered from by moving to the latest resource version
	logp.Info("kubernetes: Ignoring event, moving to most recent resource version")
	w.lastResourceVersion = ""
}
```
@jsoriano (Member) commented on this code:

We may need to resync the stored objects, e.g. what would happen if we lose a delete event while we reconnect?

@exekias (Contributor) replied:

I've been doing some tests. At least when I reproduce this, the last resource version increases by several hundred while no real change happened.

I agree with you @jsoriano, a resync may be needed to guarantee no events are missed during the reconnect.

Still, taking into account the severity of this issue, I'm ok with merging + backporting this as it is, as it improves the current situation (infinite loop + crash). I can do some more research and come up with a resync mechanism afterwards.

What do you think?

@vjsamuel (Contributor, Author) replied:

I think we are good to merge this one in.

@exekias added the bug, libbeat, and needs_backport (PR is waiting to be backported to other branches) labels Mar 11, 2018
@exekias (Contributor) commented Mar 11, 2018

jenkins, test this please

@vjsamuel (Contributor, Author) commented:

+1. The reconciliation might be a bigger change, as we need to do some state management. We can unblock customers and fix that one in a subsequent PR, as sketched below.
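As a rough illustration of the resync idea discussed above, here is a purely hypothetical sketch of what such a reconciliation step could look like; the Pod type, cache layout, and listPods helper are assumptions, not the eventual implementation. After reconnecting without a resource version, the watcher re-lists everything and reconciles its local cache, dropping entries whose delete events were missed while disconnected:

```go
package main

import "fmt"

// Pod is a minimal stand-in for the real resource type (assumption).
type Pod struct {
	UID  string
	Name string
}

// resync reconciles a local pod cache against a fresh full list from the API
// server after a watch reconnects without a resource version. Entries missing
// from the fresh list are dropped, covering delete events missed while the
// watch was down. listPods is a placeholder for a full LIST call.
func resync(cache map[string]Pod, listPods func() ([]Pod, error)) error {
	pods, err := listPods()
	if err != nil {
		return err
	}
	seen := make(map[string]bool, len(pods))
	for _, p := range pods {
		seen[p.UID] = true
		cache[p.UID] = p // add or update entries present in the fresh list
	}
	for uid := range cache {
		if !seen[uid] {
			delete(cache, uid) // its delete event was missed during the disconnect
		}
	}
	return nil
}

func main() {
	cache := map[string]Pod{
		"a": {UID: "a", Name: "kept"},
		"b": {UID: "b", Name: "deleted-while-disconnected"},
	}
	list := func() ([]Pod, error) { return []Pod{{UID: "a", Name: "kept"}}, nil }
	if err := resync(cache, list); err != nil {
		panic(err)
	}
	fmt.Println(cache) // map[a:{a kept}]
}
```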

@vjsamuel (Contributor, Author) commented:

@exekias based on this comment, the resource version jump seems to be normal:
kubernetes/kubernetes#55230 (comment)

@exekias (Contributor) commented Mar 11, 2018

The failing test is not related to this change.

@exekias merged commit a44818c into elastic:master Mar 12, 2018
@exekias removed the needs_backport label Mar 12, 2018
exekias pushed a commit to exekias/beats that referenced this pull request Mar 12, 2018
ruflin pushed a commit that referenced this pull request Mar 12, 2018: Fix infinite failure on Kubernetes watch (#6530), a cherry-pick of PR #6504 to the 6.2 branch. The original message is the same as the PR description above.
@vjsamuel deleted the fix_6503 branch July 25, 2018 06:09
leweafan pushed a commit to leweafan/beats that referenced this pull request Apr 28, 2023: Fix infinite failure on Kubernetes watch (elastic#6530), a cherry-pick of PR elastic#6504 to the 6.2 branch. The original message is the same as the PR description above.

Successfully merging this pull request may close these issues.

Infinite loop while watching for Kubernetes events
4 participants