
[GkeStartPodOperator] - Kubernetes client request can hang indefinitely #36802

Closed
IKholopov opened this issue Jan 15, 2024 · 4 comments
Labels
area:providers good first issue kind:bug This is a clearly a bug provider:google Google (including GCP) related issues

Comments

@IKholopov
Contributor

IKholopov commented Jan 15, 2024

Apache Airflow Provider(s)

google

Versions of Apache Airflow Providers

apache-airflow-providers-google==10.12.0

Apache Airflow version

2.6.3

Operating System

Ubuntu 20.04.6

Deployment

Other

Deployment details

N/A

What happened

In a DAG with ~500 GkeStartPodOperator tasks (running pods on another cluster, hosted on GKE), we discovered that operator execution hangs while polling logs in ~0.2% of the task instances. Based on the logs, execution halts inside a kubernetes client call (read_namespaced_pod_log, to be exact).
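
For context, a minimal sketch of the kind of task definition involved (the DAG id, project, location, cluster, and image values are illustrative, not from our deployment):

import pendulum

from airflow import DAG
from airflow.providers.google.cloud.operators.kubernetes_engine import GKEStartPodOperator

with DAG(
    dag_id="gke_pod_fanout",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    schedule=None,
) as dag:
    # ~500 such tasks, each starting a pod on a separate GKE cluster.
    run_job = GKEStartPodOperator(
        task_id="run_job_001",
        project_id="my-project",
        location="us-central1",
        cluster_name="worker-cluster",
        namespace="default",
        name="job-pod-001",
        image="gcr.io/my-project/worker:latest",
        cmds=["python", "run_job.py"],
    )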

Only after the DAG run timeout (hours later), when SIGTERM is dispatched to the task run process, does execution resume; it then retries fetching the logs and pod status, but those have already been garbage collected.

This looks exactly like kubernetes-client/python#1234 (comment). After running the same deployment in deferrable mode, one task also ended up stuck in a similar way, this time in a different call (pod creation):

Traceback (most recent call last):
  File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 310, in run_pod_async
    resp = self._client.create_namespaced_pod(
  File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 7356, in create_namespaced_pod
    return self.create_namespaced_pod_with_http_info(namespace, body, **kwargs)  # noqa: E501
  File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 7455, in create_namespaced_pod_with_http_info
    return self.api_client.call_api(
  File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 391, in request
    return self.rest_client.POST(url,
  File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 275, in POST
    return self.request("POST", url,
  File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 168, in request
    r = self.pool_manager.request(
  File "/opt/python3.8/lib/python3.8/site-packages/urllib3/request.py", line 81, in request
    return self.request_encode_body(
  File "/opt/python3.8/lib/python3.8/site-packages/urllib3/request.py", line 173, in request_encode_body
    return self.urlopen(method, url, **extra_kw)
  File "/opt/python3.8/lib/python3.8/site-packages/urllib3/poolmanager.py", line 376, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/opt/python3.8/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
  File "/opt/python3.8/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/opt/python3.8/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request
    httplib_response = conn.getresponse()
  File "/opt/python3.8/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/opt/python3.8/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/opt/python3.8/lib/python3.8/http/client.py", line 277, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/opt/python3.8/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "/opt/python3.8/lib/python3.8/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/opt/python3.8/lib/python3.8/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
  File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1521, in signal_handler
    raise AirflowException("Task received SIGTERM signal")

I believe this is specific to GkeStartPodOperator: KubernetesHook has a mechanism that ensures TCP keepalive is configured on its client by default, but GKEPodHook's get_conn does not:
def get_conn(self) -> client.ApiClient:
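
For reference, the cncf.kubernetes provider's keepalive mechanism boils down to adding TCP keepalive socket options to urllib3's default connection options before the client is created. A minimal sketch of that approach (the idle/interval/count defaults below are illustrative, not the provider's actual configuration values):

import socket

from urllib3.connection import HTTPConnection, HTTPSConnection


def _enable_tcp_keepalive(idle: int = 120, interval: int = 30, count: int = 6) -> None:
    # Keepalive probes let the kernel detect a silently dropped connection
    # instead of blocking forever in recv(), which is what the traceback above shows.
    socket_options = [(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)]

    # The TCP_KEEP* constants are platform-dependent, hence the hasattr() guards.
    if hasattr(socket, "TCP_KEEPIDLE"):
        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle))
    if hasattr(socket, "TCP_KEEPINTVL"):
        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval))
    if hasattr(socket, "TCP_KEEPCNT"):
        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count))

    HTTPConnection.default_socket_options = HTTPConnection.default_socket_options + socket_options
    HTTPSConnection.default_socket_options = HTTPSConnection.default_socket_options + socket_options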

What you think should happen instead

GKEPodHook should reuse the same socket configuration as KubernetesHook and configure TCP keepalive by default (unless explicitly disabled).
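
A minimal sketch of what that could look like, assuming GKEPodHook keeps building its client from a kubernetes Configuration object and gains an opt-out flag (the _get_config helper and enable_tcp_keepalive attribute below are illustrative, not necessarily the hook's actual internals):

def get_conn(self) -> client.ApiClient:
    configuration = self._get_config()  # existing GKE-specific client configuration (assumed helper)
    if self.enable_tcp_keepalive:  # assumed opt-out flag, defaulting to True
        _enable_tcp_keepalive()  # same socket-option patching as the cncf.kubernetes provider (see sketch above)
    return client.ApiClient(configuration)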

How to reproduce

Run ~500 tasks on GKE with spot VMs. There is no reliable repro, but the problem has been clearly documented before and was fixed for the CNCF Kubernetes provider: #11406.

Anything else

No response

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

@IKholopov IKholopov added area:providers kind:bug This is a clearly a bug needs-triage label for new issues that we didn't triage yet labels Jan 15, 2024

boring-cyborg bot commented Jan 15, 2024

Thanks for opening your first issue here! Be sure to follow the issue template! If you are willing to raise PR to address this issue please do so, no need to wait for approval.

@vatsrahul1001 vatsrahul1001 added the provider:google Google (including GCP) related issues label Jan 16, 2024
@dirrao
Contributor

dirrao commented Jan 22, 2024

@IKholopov Thanks for reporting the issue.
Looks like it's a bug. It requires further investigation.

@dirrao dirrao removed the needs-triage label for new issues that we didn't triage yet label Jan 22, 2024
@MaksYermak
Contributor

Hello Team!
I am currently investigating this issue and will prepare a fix for it.

@eladkal
Contributor

eladkal commented Dec 31, 2024

Fixed in #36999

@eladkal eladkal closed this as completed Dec 31, 2024