Unable to find any logs to tail makes pods unhealthy #808

Closed · alikhil opened this issue Jul 29, 2019 · 14 comments
Labels: component/agent, keepalive, type/question

Comments

alikhil commented Jul 29, 2019

Describe the bug
Promtail responds with 500 Not ready: Unable to find any logs to tail. Please verify permissions, volumes, scrape_config, etc. when there are no logs to scrape that match the scrape condition.

To Reproduce
Steps to reproduce the behavior:

  1. Started loki (master-f2bec3b0)
  2. Started promtail (master-f2bec3b0) to tail pods with the label monitor: spring-app (see the config sketch below)
  3. Started a deployment with fewer pods than there are nodes in the cluster
  4. Checked the list of promtail pods: on nodes where no pod matches the label selector, the promtail pods fail to become ready
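
For reference, a scrape_config of the kind step 2 describes might look roughly like this; the job name and exact label handling are assumptions based on the report, not the reporter's actual config:

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods carrying the label monitor=spring-app; on nodes
      # with no such pod, promtail discovers nothing to tail.
      - source_labels: [__meta_kubernetes_pod_label_monitor]
        action: keep
        regex: spring-app
      # (the usual relabeling that sets __path__ to /var/log/pods/... is omitted here)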

Expected behavior
Promtail pods on all worker nodes are up and running.

Environment:

  • Infrastructure: Kubernetes
  • Deployment tool: helm

Screenshots, promtail config, or terminal output
Logs from failing pod:

level=warn ts=2019-07-29T10:17:59.433772066Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=warn ts=2019-07-29T10:17:59.511098591Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=warn ts=2019-07-29T10:17:59.511380636Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=warn ts=2019-07-29T10:17:59.511513947Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=warn ts=2019-07-29T10:17:59.511627719Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=info ts=2019-07-29T10:17:59.512196582Z caller=kubernetes.go:192 component=discovery discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-07-29T10:17:59.514231267Z caller=server.go:120 http=[::]:3101 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2019-07-29T10:17:59.514633991Z caller=main.go:55 msg="Starting Promtail" version="(version=master-f2bec3b0, branch=master, revision=f2bec3b0)"
level=warn ts=2019-07-29T10:18:10.670280612Z caller=logging.go:49 msg="GET /ready (500) 151.423µs Response: \"Not ready: Unable to find any logs to tail. Please verify permissions, volumes, scrape_config, etc.\\n\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: kube-probe/1.11; "
level=warn ts=2019-07-29T10:18:20.669893429Z caller=logging.go:49 msg="GET /ready (500) 122.642µs Response: \"Not ready: Unable to find any logs to tail. Please verify permissions, volumes, scrape_config, etc.\\n\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: kube-probe/1.11; "
level=warn ts=2019-07-29T10:18:30.670121286Z caller=logging.go:49 msg="GET /ready (500) 342.837µs Response: \"Not ready: Unable to find any logs to tail. Please verify permissions, volumes, scrape_config, etc.\\n\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: kube-probe/1.11; "

Logs from running pod:

level=warn ts=2019-07-29T09:45:36.376487811Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=warn ts=2019-07-29T09:45:36.37702662Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=warn ts=2019-07-29T09:45:36.377624682Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=warn ts=2019-07-29T09:45:36.377787544Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=warn ts=2019-07-29T09:45:36.377903092Z caller=filetargetmanager.go:98 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=info ts=2019-07-29T09:45:36.378411566Z caller=kubernetes.go:192 component=discovery discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-07-29T09:45:36.466269594Z caller=server.go:120 http=[::]:3101 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2019-07-29T09:45:36.467184037Z caller=main.go:55 msg="Starting Promtail" version="(version=master-f2bec3b0, branch=master, revision=f2bec3b0)"
level=info ts=2019-07-29T09:45:41.668433219Z caller=filetargetmanager.go:257 msg="Adding target" key="{app=\"my-app\", category=\"backend\", container_name=\"my-app\", env=\"development\", instance=\"my-app-857d7c5d56-6vmsw\", job=\"development/my-app\", monitor=\"spring-app\", namespace=\"development\", pod_template_hash=\"4138371812\"}"
level=info ts=2019-07-29T09:45:41.669612293Z caller=tailer.go:73 msg="start tailing file" path=/var/log/pods/a44eb032-ad23-11e9-b35e-fa163ecb059f/my-app/0.log
2019/07/29 09:45:41 Seeked /var/log/pods/a44eb032-ad23-11e9-b35e-fa163ecb059f/my-app/0.log - &{Offset:0 Whence:0}
$ kubectl get pods 
promtail-4j9vx   1/1     Running   0          1h
promtail-6zfx2   1/1     Running   0          1h
promtail-f968n   1/1     Running   0          57m
promtail-fjk8c   0/1     Running   0          1m
promtail-fp8jc   1/1     Running   0          57m
promtail-h6vg6   1/1     Running   0          1h
promtail-qjjzf   0/1     Running   0          1m
promtail-ztktd   1/1     Running   0          1h
steven-sheehy (Contributor) commented

That's expected. You need to fix your scrape config to be able to scrape at least one pod. At minimum, it should be able to scrape itself.
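
A minimal sketch of that idea, assuming a keep rule like the one sketched above: combine the app label and promtail's own label into a single rule so it matches either (the label names here are assumptions and depend on how promtail is deployed):

relabel_configs:
  # Keep pods matching monitor=spring-app OR promtail itself. Both labels go
  # into one rule because multiple keep rules are ANDed together, not ORed.
  - source_labels: [__meta_kubernetes_pod_label_monitor, __meta_kubernetes_pod_label_app]
    separator: ';'
    action: keep
    regex: spring-app;.*|.*;promtail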

alikhil (Author) commented Jul 30, 2019

@steven-sheehy what if I don't want to scrape logs from all the pods in the cluster, only pods of selected apps (which may be deployed on some nodes, not all)? It seems such a use case is not covered at the moment.

I think it can be solved by passing some flag like IGNORE_NO_LOGS.

pennpeng commented Aug 5, 2019

+1

geekodour commented Aug 9, 2019

@alikhil these two links might be useful for getting your scrape_config right; you can select what you want to scrape using relabel configs:

https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config

alikhil (Author) commented Aug 13, 2019

@geekodour thank you! Could you tell me what the right scrape_config would be so that promtail does not fail when there is nothing to scrape?

stale bot commented Sep 12, 2019

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label on Sep 12, 2019
alikhil (Author) commented Sep 12, 2019

Do not stale

stale bot removed the stale label on Sep 12, 2019
stale bot commented Oct 12, 2019

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label on Oct 12, 2019
cyriltovena added the keepalive label on Oct 12, 2019
stale bot removed the stale label on Oct 12, 2019
DonnieBwin commented

Hello, has this been solved?

HuJake commented Dec 4, 2019

I have the same problem, how can I solve it?

NAME                 READY   STATUS    RESTARTS   AGE
test-loki-0           1/1     Running   0          5h49m
test-promtail-bd9h6   0/1     Running   0          3m42s
test-promtail-bpfsc   0/1     Running   0          3m42s
test-promtail-chrcv   0/1     Running   0          3m42s
test-promtail-dq2jw   0/1     Running   0          3m42s
test-promtail-j4sxg   0/1     Running   0          3m42s
caller=logging.go:49 msg="GET /ready (500) 40.895µs Response: \"Not ready: Unable to find any logs to tail. Please verify permissions, volumes, scrape_config, etc.\\n\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: kube-probe/1.14; "

I entered the pod to take a look and found that the positions file (filename: /run/positions.yaml) is empty. Is that a problem?

slim-bean (Collaborator) commented

Generally we would suggest at least tailing the promtail log to avoid this issue.

If you don't want to do this, I suggest just removing the readiness probe entirely.
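
If you go that route, one way is to patch the probe out of the DaemonSet directly; this is a sketch rather than chart-specific advice, and a Helm upgrade would restore the probe unless you also override it in the chart's values:

kubectl patch daemonset promtail --type json \
  -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/readinessProbe"}]'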

slim-bean (Collaborator) commented

For anyone who was following this ticket: #1920 introduced a flag that essentially makes the /ready endpoint always return 200.
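
If I recall correctly, the option in question is the server-level health_check_target setting; the name is worth verifying against your promtail version:

server:
  http_listen_port: 3101
  grpc_listen_port: 9095
  # When false, /ready no longer requires at least one active tail target
  # (option name assumed from PR #1920; check your promtail docs).
  health_check_target: false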

imranrazakhan commented Aug 11, 2020

@slim-bean In my case the Docker log driver was set to journald; I changed it to "log-driver": "json-file" and it started working.
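
For context: with the json-file driver, Docker writes each container's logs to a file on disk, which the kubelet exposes under /var/log/pods where promtail tails them; with journald the logs go to the systemd journal instead, leaving no files to find. The driver is set in /etc/docker/daemon.json, roughly like this:

{
  "log-driver": "json-file"
}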

fabiopaiva commented

Following @slim-bean's suggestion, I got it working with the following configuration:

config:
  snippets:
    extraRelabelConfigs:
      # Combine labels in order to keep logs from NGINX Ingress and Promtail
      # https://github.com/grafana/loki/issues/808#issuecomment-592698307
      - source_labels: [ __meta_kubernetes_pod_label_app_kubernetes_io_instance, __meta_kubernetes_pod_label_app_kubernetes_io_name ]
        separator: ';'
        target_label: combined_labels
      - source_labels: [ combined_labels ]
        action: keep
        regex: alb-ingress-nginx;.*|.*;promtail
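
The separator trick works around the fact that multiple keep rules are ANDed together: concatenating both labels into one temporary label and matching either side of the regex gives OR semantics, so a pod is kept when its instance label is alb-ingress-nginx or its name label is promtail.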
