Uncaught exceptions within the streaming pull code. #7709

Closed
kamalaboulhosn opened this issue Apr 15, 2019 · 22 comments · Fixed by #7863

Comments

@kamalaboulhosn

kamalaboulhosn commented Apr 15, 2019

This comes from a StackOverflow question. There are internal exceptions that are not being caught, which results in the client library no longer delivering messages.

Exception in thread Thread-LeaseMaintainer:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/grpc/_channel.py", line 549, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib/python3.6/site-packages/grpc/_channel.py", line 466, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "channel is in state TRANSIENT_FAILURE"
    debug_error_string = "{"created":"@1554568036.075280756","description":"channel is in state TRANSIENT_FAILURE","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2294,"grpc_status":14}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/google/api_core/retry.py", line 179, in retry_target
    return target()
  File "/usr/local/lib/python3.6/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.ServiceUnavailable: 503 channel is in state TRANSIENT_FAILURE

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/leaser.py", line 146, in maintain_leases
    [requests.ModAckRequest(ack_id, p99) for ack_id in ack_ids]
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/dispatcher.py", line 152, in modify_ack_deadline
    self._manager.send(request)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/streaming_pull_manager.py", line 268, in send
    self._send_unary_request(request)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/streaming_pull_manager.py", line 259, in _send_unary_request
    ack_deadline_seconds=deadline,
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/_gapic.py", line 45, in <lambda>
    fx = lambda self, *a, **kw: wrapped_fx(self.api, *a, **kw)  # noqa
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/gapic/subscriber_client.py", line 723, in modify_ack_deadline
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/usr/local/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/google/api_core/retry.py", line 270, in retry_wrapped_func
    on_error=on_error,
  File "/usr/local/lib/python3.6/site-packages/google/api_core/retry.py", line 199, in retry_target
    last_exc,
  File "<string>", line 3, in raise_from
google.api_core.exceptions.RetryError: Deadline of 600.0s exceeded while calling functools.partial(<function _wrap_unary_errors.<locals>.error_remapped_callable at 0x7f86228cd400>, subscription: "projects/xxxxx-dev/subscriptions/telemetry-sub"
ack_deadline_seconds: 10
ack_ids: "QBJMJwFESVMrQwsqWBFOBCEhPjA-RVNEUAYWLF1GSFE3GQhoUQ5PXiM_NSAoRRoHIGoKOUJdEmJoXFx1B1ALEHQoYnxvWRYFCEdReF1YHQdodGxXOFUEHnN1Y3xtWhQDAEFXf3f8gIrJ38BtZho9WxJLLD5-LDRFQV4"
, metadata=[('x-goog-api-client', 'gl-python/3.6.8 grpc/1.19.0 gax/1.8.2 gapic/0.40.0')]), last exception: 503 channel is in state TRANSIENT_FAILURE

Thread-ConsumeBidirectionalStream caught unexpected exception Deadline of 600.0s exceeded while calling functools.partial(<function _wrap_unary_errors.<locals>.error_remapped_callable at 0x7f86228cda60>, subscription: "projects/xxxxx-dev/subscriptions/telemetry-sub"
ack_deadline_seconds: 10
ack_ids: "QBJMJwFESVMrQwsqWBFOBCEhPjA-RVNEUAYWLF1GSFE3GQhoUQ5PXiM_NSAoRRoHIGoKOUJdEmJoXFx1B1ALEHQoYnxvWRYFCEdReF1YHAdodGxXOFUEHnN1aXVoWxAIBEdXeXf8gIrJ38BtZho9WxJLLD5-LDRFQV4"
, metadata=[('x-goog-api-client', 'gl-python/3.6.8 grpc/1.19.0 gax/1.8.2 gapic/0.40.0')]), last exception: 503 channel is in state TRANSIENT_FAILURE and will exit.

The user who reported the error was using the following versions:

python == 3.6.5
google-cloud-pubsub == 0.40.0 # but this has behaved similarly for at least the last several versions
google-api-core == 1.8.2
google-api-python-client == 1.7.8

@sduskis sduskis added api: pubsub Issues related to the Pub/Sub API. priority: p1 Important issue which blocks shipping the next release. Will be fixed prior to next release. type: bug Error or flaw in code with unintended results or allowing sub-optimal usage patterns. labels Apr 16, 2019
@yoshi-automation yoshi-automation added the triage me I really want to be triaged. label Apr 16, 2019
@tseaver tseaver removed the triage me I really want to be triaged. label Apr 16, 2019
@jakeczyz

We're seeing this behavior with the following stack trace as well. Errors are thrown and the subscriber stops receiving messages, without the main thread's future.result(timeout=x) call raising an exception (other than the timeout).

Error in queue callback worker: Deadline of 600.0s exceeded while calling functools.partial(<function _wrap_unary_errors.<locals>.error_remapped_callable at 0x7f6441ece6a8>, subscription: "projects/my-project/subscriptions/my-subscription"
ack_ids: "IkVBXkASTCQXRElTK0MLKlgRTgQhIT4wPkVTRFAGFixdRkhRNxkIaFEOT14jPzUgKEUSAABneSQZRhIKB1xcdQdRDB8jfjV2a1NBUgRPU3RfcysvV1ledwNWDx17fWd2a18TCSq8gaXa0elrZh49WhJLLD5-Lw"
, metadata=[('x-goog-api-client', 'gl-python/3.6.8 grpc/1.19.0 gax/1.8.0 gapic/0.39.1')]), last exception: 503 channel is in state TRANSIENT_FAILURE
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/grpc/_channel.py", line 549, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib/python3.6/site-packages/grpc/_channel.py", line 466, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "channel is in state TRANSIENT_FAILURE"
	debug_error_string = "{"created":"@1555805428.654693392","description":"channel is in state TRANSIENT_FAILURE","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2294,"grpc_status":14}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/google/api_core/retry.py", line 179, in retry_target
    return target()
  File "/usr/local/lib/python3.6/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.ServiceUnavailable: 503 channel is in state TRANSIENT_FAILURE

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/helper_threads.py", line 108, in __call__
    self._callback(items)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/dispatcher.py", line 97, in dispatch_callback
    self.ack(batched_commands.pop(requests.AckRequest))
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/dispatcher.py", line 117, in ack
    self._manager.send(request)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/streaming_pull_manager.py", line 267, in send
    self._send_unary_request(request)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/streaming_pull_manager.py", line 243, in _send_unary_request
    subscription=self._subscription, ack_ids=list(request.ack_ids)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/_gapic.py", line 45, in <lambda>
    fx = lambda self, *a, **kw: wrapped_fx(self.api, *a, **kw)  # noqa
  File "/usr/local/lib/python3.6/site-packages/google/cloud/pubsub_v1/gapic/subscriber_client.py", line 788, in acknowledge
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/usr/local/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/google/api_core/retry.py", line 270, in retry_wrapped_func
    on_error=on_error,
  File "/usr/local/lib/python3.6/site-packages/google/api_core/retry.py", line 199, in retry_target
    last_exc,
  File "<string>", line 3, in raise_from
google.api_core.exceptions.RetryError: Deadline of 600.0s exceeded while calling functools.partial(<function _wrap_unary_errors.<locals>.error_remapped_callable at 0x7f6441ece6a8>, subscription: "projects/my-project/subscriptions/my-subsciption"
ack_ids: "IkVBXkASTCQXRElTK0MLKlgRTgQhIT4wPkVTRFAGFixdRkhRNxkIaFEOT14jPzUgKEUSAABneSQZRhIKB1xcdQdRDB8jfjV2a1NBUgRPU3RfcysvV1ledwNWDx17fWd2a18TCSq8gaXa0elrZh49WhJLLD5-Lw"
, metadata=[('x-goog-api-client', 'gl-python/3.6.8 grpc/1.19.0 gax/1.8.0 gapic/0.39.1')]), last exception: 503 channel is in state TRANSIENT_FAILURE

Here's what our code looks like:

future = client.subscribe(path, callback=callback, flow_control=flow_control)

consecutive_errors = 0
last_error = None

while True:
    try:
        future.result(timeout=1)
    except pubsub_v1.exceptions.TimeoutError:
        pass
    except Exception as exc:
        logger.exception('Got uncaught exception in subscriber or callback'
                         f'; Traceback:{traceback.format_exc()} Retrying '
                         f'in 1s. Detail: {exc}')
        time.sleep(1)
        consecutive_errors += 1
        last_error = exc
        continue
    else:
        consecutive_errors = 0
    if consecutive_errors > MAX_CONSECUTIVE_ERRORS:
        logger.exception('Too many consecutive errors in PubSub consumer: '
                         f'{consecutive_errors}. Last exc: {last_error} '
                         'Re-raising last.')
        raise last_error  # re-raise whatever the last error was
    sleep(...)

The Exception branch is never taken, and the callback stops being invoked after such an error, requiring a restart of the main thread.

@plamut
Contributor

plamut commented Apr 26, 2019

I believe this error occurs if the underlying channel enters the TRANSIENT_FAILURE state and remains in it for too long, i.e. longer than the total_timeout_millis setting of the subscriber client.

I was not able to reproduce the bug with a sample pub/sub application running on Kubernetes, but I did manage to trigger the reported scenario locally by doing the following:

--- /home/peter/workspace/google-cloud-python/venv-3.6/lib/python3.6/site-packages/grpc/_channel.py     2019-04-23 17:01:39.282064676 +0200
+++ /home/peter/workspace/google-cloud-python/venv-3.6/lib/python3.6/site-packages/grpc/_channel.py     2019-04-25 15:49:05.220317794 +0200
@@ -456,6 +456,16 @@
 
 
 def _end_unary_response_blocking(state, call, with_call, deadline):
+    #####################
+    import datetime
+    minute = datetime.datetime.now().minute
+    if 45 <= minute <= 56:
+        state.code = grpc.StatusCode.UNAVAILABLE
+        state.details = "channel is in **fake** TRANSIENT_FAILURE state"
+        state.debug_error_string = (
+            "transient failure is faked during a fixed time window in an hour"
+        )
+    ###########################
     if state.code is grpc.StatusCode.OK:
         if with_call:
             rendezvous = _Rendezvous(state, call, None, deadline)

The patch fakes a channel error during particular minutes in an hour (adjust as necessary).

  • Start publishing messages and start a subscriber using the streaming pull (FWIW, I used my own test publisher and subscriber - link).
  • Wait for 10 minutes and some moderate amount of time more, then check the logs.

Result:
Eventually a RetryError is raised, and several threads exit (e.g. Thread-ConsumeBidirectionalStream). The main thread keeps running, but no messages are received or processed anymore. The subscriber must be stopped manually with Ctrl+C.

NOTE: It is not necessary to wait the full 10+ minutes; one can reduce the total_timeout_millis setting in the subscriber settings, as sketched below.
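
For reference, here is a rough sketch of lowering that deadline to speed up reproduction. It assumes the GAPIC-style client_config layout (and the google.cloud.pubsub_v1.gapic.subscriber_client_config module path) used by the subscriber client around that time; exact key names may differ between versions, so treat it as illustrative only:

import copy

from google.cloud import pubsub_v1
from google.cloud.pubsub_v1.gapic import subscriber_client_config

# Copy the default GAPIC config and shrink the total retry deadline from the
# default 600000 ms, so the RetryError surfaces after ~1 minute instead of 10.
client_config = copy.deepcopy(subscriber_client_config.config)
retry_params = client_config["interfaces"]["google.pubsub.v1.Subscriber"]["retry_params"]
for params in retry_params.values():
    params["total_timeout_millis"] = 60000

subscriber = pubsub_v1.SubscriberClient(client_config=client_config)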


What happens is that if the subscriber has been retrying for too long, a RetryError is raised in the retry wrapper. This error is considered non-retryable, and the subscriber ceasing to pull messages is actually expected behavior, IMO. Will look into it.

What should happen, however, is propagating the error to the main thread (and shutting everything down cleanly in the background), giving users a chance to catch the error and react to it as they see fit.
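
For illustration, once that propagation is in place, user code along the lines of the snippet posted above could catch the error itself. This is only a sketch: client, path, callback, flow_control, and logger are assumed to be set up as in the earlier snippet, and RetryError is the google.api_core exception type seen in the tracebacks:

from google.api_core import exceptions as core_exceptions

future = client.subscribe(path, callback=callback, flow_control=flow_control)

try:
    # Blocks until the streaming pull shuts down; with the error propagated,
    # a background RetryError should surface here instead of the subscriber
    # silently going quiet.
    future.result()
except core_exceptions.RetryError as exc:
    logger.warning("Streaming pull gave up after retrying: %s; resubscribing", exc)
    # e.g. recreate the subscriber client and subscribe again here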

Will discuss whether this is the expected way of handling this, and then work on a fix. Thank you for reporting the issue!

@jakeczyz

@plamut Thanks very much for digging into this! A bit surprised my team are the first to report it. Please do post here if we can provide more details or testing, or when a fix or an ETA is available. :)

@plamut
Contributor

plamut commented Apr 26, 2019

@jakeczyz AFAIK there have been several independent reports of the same (or similar) bug in the past, including for non-Python clients, but it was (very) difficult to reproduce. I could not reproduce it either, and thus only suspect that this is the true cause, kicking in on random occasions. The tracebacks are very similar, though, which is promising.

I do not have an ETA yet, but expect to discuss this with others next week - will post more when I know more. :)

@plamut
Contributor

plamut commented May 3, 2019

Just as a quick update, it appears to me that in order to propagate the RetryError to the user code, a change might be necessary in one of the Pub/Sub client dependencies (API core, specifically).

Right now the background consumer thread does not propagate any errors and assumes that all error handling is done through the underlying RPC. However, if a RetryError occurs, the consumer thread terminates, but the underlying gRPC channel does not (it is in the TRANSIENT_FAILURE state, after all).

The subscriber client shuts itself down when the channel terminates, but since the latter does not happen, the client shutdown does not happen either, and the future's result never gets set, despite the consumer thread no longer running.

Changes to bidi.BackgroundConsumer might be needed, although that will have to be coordinated with the teams working on other libraries that could be affected by that.

Update: API core changes will not be needed after all; the subscriber client can properly respond to retry errors on its own.

@plamut
Contributor

plamut commented May 7, 2019

A fix for this issue has been merged. It makes sure that if a RetryError happens in the background, it is properly propagated to the main thread, and a clean streaming pull shutdown is triggered. I don't know about the next release date, though.

Again, I was not able to actually reproduce the error in a production setup, but was able to reproduce similar tracebacks locally by faking it. Should the fix prove to be insufficient, feel free to comment here with new info (and thanks in advance!).

@jakeczyz

jakeczyz commented May 7, 2019

Thanks. We'll report back if it still seems to break this way after the new code is released and available on PyPI. We see this problem 1-2 times a week, so it won't be long before we have confirmation. Thanks again for your work on fixing this!

@sreetamdas

Facing the same issue:

Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/env/local/lib/python3.7/site-packages/grpc/_channel.py", line 604, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/env/local/lib/python3.7/site-packages/grpc/_channel.py", line 506, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.DEADLINE_EXCEEDED
    details = "Deadline Exceeded"
    debug_error_string = "{"created":"@1573039594.532406734","description":"Error received from peer ipv4:172.217.214.95:443","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Deadline Exceeded","grpc_status":4}"
>

Is there a fix for this, and what's going wrong in the first place?

@plamut
Contributor

plamut commented Nov 6, 2019

@sreetamdas The fix for the original issue was merged several releases ago, but there might be another bug that results in a similar error.

Which PubSub client and grpcio versions were used when the error happened? Did the error propagate to the main thread and shut down the streaming pull? Are there any notable circumstances that make the error more probable, or does it occur seemingly randomly?

Any extra information could be useful, thanks!

@sreetamdas

Thanks for replying @plamut!

I am currently using the google-cloud-pubsub client, running on a GCP Cloud Function. I am using the client to pull messages from my subscription.

Something to note: this error doesn't show up all the time. In fact, it hasn't shown up in the past 24 hours, while it came up about 7 out of 10 times whenever I'd try to run my Cloud Function.

Additionally, as part of clearing up my old data in Pub/Sub, I'd resorted to deleting the subscription and purging messages from it; but I'm not entirely sure that was the root cause of this failure.

Is it possible that the Deadline Exceeded error message is referring to an ack deadline for a message?

@plamut
Contributor

plamut commented Nov 12, 2019

@sreetamdas A DeadlineExceeded exception occurs if an operation does not complete in the expected time frame, e.g. requests failing due to temporary network issues. This error type is considered retryable, meaning that the subscriber client will automatically try to reopen the message stream (when using the streaming pull).

The ACK deadline for a message is somewhat different - it's a server-side limit, and if the server does not receive an ACK request before that deadline, it will try to re-send the same message. That could happen if the client's ACK response gets lost, for instance.

Since the network is not 100% reliable, it is kind of expected that DeadlineExceeded errors will occur from time to time, but as mentioned before, the client does its best to recover from these.
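
(As a side note, that server-side ACK deadline is a property of the subscription itself. A minimal sketch of setting a longer initial deadline when creating a subscription, with made-up project/topic/subscription names:)

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
# Hypothetical resource names, for illustration only.
topic_path = subscriber.topic_path("my-project", "my-topic")
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

# If no ACK (or deadline extension) reaches the server within this window,
# the message is redelivered.
subscriber.create_subscription(
    name=subscription_path, topic=topic_path, ack_deadline_seconds=60
)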

Could it be that those 7/10 Cloud Function failures all happened in a short time span? It is quite possible that the network was unreliable at that time, especially when reading that the error did not repeat in the last 24 hours.

@sreetamdas

sreetamdas commented Nov 13, 2019

I am using Cloud Scheduler to run a cron job every hour against an HTTP endpoint, so my Cloud Function comes up every hour and starts pulling messages. So I don't think it's a temporary network issue?

I've also tried invoking my Function manually as well as trying to pull messages on my local machine using google-cloud-pubsub's pubsub_v1.SubscriberClient() (I am using service account credentials for the latter), and this error comes up haphazardly for the former and 100% of the time for the latter.

Side note: Do you believe that I should contact GCP at this stage 😅 ?

@plamut
Contributor

plamut commented Nov 13, 2019

@sreetamdas Hard to tell in a vacuum, i.e. without seeing the code, maybe some additional log output, and knowing about the exact library versions used. Is pulling the messages done synchronously (client.pull()) or asynchronously (client.subscribe())?

If it happens that often and across longer time spans, a temporary network issue can probably be excluded, indeed.

If you believe that the application setup and the code are both correct, contacting GCP is an option, as they have a much better insight into the overall setup and what is happening behind the scenes.

In any case, looking forward to any additional info that could help narrowing down the issue.
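
(For reference, the two pulling styles mentioned above look roughly like this; a sketch only, with a placeholder subscription path:)

from google.cloud import pubsub_v1

subscription_path = "projects/my-project/subscriptions/my-subscription"  # placeholder

# Synchronous pull: one blocking request for a batch of messages.
sync_client = pubsub_v1.SubscriberClient()
response = sync_client.pull(subscription_path, max_messages=10)
ack_ids = [msg.ack_id for msg in response.received_messages]
if ack_ids:
    sync_client.acknowledge(subscription_path, ack_ids)

# Asynchronous streaming pull: messages are delivered to a callback on
# background threads until the returned future fails or is cancelled.
def callback(message):
    message.ack()

streaming_client = pubsub_v1.SubscriberClient()
future = streaming_client.subscribe(subscription_path, callback=callback)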

@sreetamdas

@plamut Sorry, I genuinely completely forgot about that. Here are the (relevant) packages I'm using:

google-api-core==1.14.3
google-auth==1.6.3
google-cloud==0.34.0
google-cloud-pubsub==1.0.2
googleapis-common-protos==1.6.0
grpc-google-iam-v1==0.12.3
grpcio==1.24.1

And here's the (relevant) code snippet:

from google.cloud import pubsub_v1

# from_service_account_json is a classmethod, so call it on the class directly.
subscriber = pubsub_v1.SubscriberClient.from_service_account_json(
    "service_account.json"
)
response = subscriber.pull(input_topic, max_messages=10)
print(">>", response)
for message in response.received_messages:
    print(message.message.data.decode("utf-8"))

I looked around and found that when there are no messages in the subscription, Pub/Sub does return a 504 error, so I also tried ensuring there were messages present in the subscription and using the return_immediately param; same result (504 Deadline Exceeded) as before. 😢

@plamut
Contributor

plamut commented Nov 13, 2019

@sreetamdas Thanks, I can now see that the code uses a synchronous pull method.

I was actually able to reproduce the reported behavior - if there are messages available, the code snippet works fine. On the other hand, if there are no messages, a DeadlineExceeded error (504) is raised shortly after.

Using the return_immediately=True argument, however, worked fine, and subscriber.pull() did not raise an error. Could you perhaps double check that the error in your app occurs even with return_immediately? Is it possible that the call succeeds, but there is some other part of the code that results in the same DeadlineExceeded error?

FWIW, it seems counter-intuitive to receive 504 instead of a successful empty response, i.e. without messages. I'll check with the backend team if this is intended behavior.
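
(For completeness, the variant with return_immediately that worked for me looked roughly like this; a sketch, with a placeholder subscription path:)

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = "projects/my-project/subscriptions/my-subscription"  # placeholder

# With return_immediately=True the server answers right away, even with an
# empty batch, instead of holding the request open until messages arrive
# (which appears to be what runs into the 504 here).
response = subscriber.pull(
    subscription_path, max_messages=10, return_immediately=True
)
print(len(response.received_messages))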

@plamut
Contributor

plamut commented Nov 14, 2019

@sreetamdas Still awaiting a definite answer on the early deadline exceeded error.

BTW, I was informed that the return_immediately flag should actually not be used (it's deprecated, or likely to be deprecated soon), and some of the PubSub clients in other languages do not even expose it.

@sreetamdas

@plamut I apologise for not having replied here sooner, but I was away. Funnily enough, in my then-ongoing search for alternate solutions, I stumbled upon a comment on an issue in this repo itself, which said that they'd faced similar issues (I believe their error was a different one), but only after they'd upgraded their google-cloud-pubsub package to v1.0.0; upon reverting to v0.45.0 it worked as intended (and as it had before).

I was a bit skeptical that it'd work, but lo and behold, my pipelines are working (flawlessly) again.

I'll dig out that comment, and thanks again for your time. I wish I could provide you with steps to reproduce the error on your end, but it's pretty much just a standard pull (I'm not using the return_immediately option), so there's not much to work on. Please do let me know if I can help you with this in any way.

Thanks again!

@plamut
Contributor

plamut commented Nov 18, 2019

@sreetamdas I actually did manage to reproduce the reported behavior, but I still appreciate your willingness to help!

Since this is a synchronous pull (as opposed to the asynchronous streaming pull this issue is about), I will open a separate issue for easier traceability.

Update: Issue created - https://github.com/googleapis/google-cloud-python/issues/9822

@radianceltd

File "", line 3, in raise_from
google.api_core.exceptions.RetryError: Deadline of 120.0s exceeded while calling functools.partial(<function _wrap_unary_errors..error_remapped_callable at 0x10717eea0>, parent: "projects/tmw025/locations/us-central1/registries/my-registry"
, metadata=[('x-goog-request-params', 'parent=projects/tmw025/locations/us-central1/registries/my-registry'), ('x-goog-api-client', 'gl-python/3.7.0 grpc/1.25.0 gax/1.15.0 gapic/0.3.0')]), last exception: 504 Deadline Exceeded
192.168.0.55 - - [17/Dec/2019 15:33:05] "POST /test/add_gateway/ HTTP/1.1" 500 -

@radianceltd

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = current_app.config['GOOGLE_APPLICATION_CREDENTIALS']

    # credentials = service_account.Credentials.from_service_account_file(
    # "your-json-file - path -with-filename.json")

    client = iot_v1.DeviceManagerClient()

    parent = client.registry_path(project_id, cloud_region, registry_id)
    devices = list(client.list_devices(parent=parent))

@plamut
Contributor

plamut commented Dec 17, 2019

@radianceltd Is this error related to the PubSub synchronous pull, or...? It seems more like an issue with the Cloud IoT?

@bisandip

Ran into this on a WAMP localhost setup with PHP version 7.2; error found:
C:\wamp64\www\google-ads-php>php examples/BasicOperations/GetCampaigns.php --customerId 3406339000

Fatal error: Uncaught BadMethodCallException: Streaming calls are not supported while using the REST transport. in C:\wamp64\www\google-ads-php\vendor\google\gax\src\Transport\HttpUnaryTransportTrait.php:125
Stack trace:
#0 C:\wamp64\www\google-ads-php\vendor\google\gax\src\Transport\HttpUnaryTransportTrait.php(63): Google\ApiCore\Transport\RestTransport->throwUnsupportedException()
#1 C:\wamp64\www\google-ads-php\vendor\google\gax\src\GapicClientTrait.php(511): Google\ApiCore\Transport\RestTransport->startServerStreamingCall(Object(Google\ApiCore\Call), Array)
#2 C:\wamp64\www\google-ads-php\vendor\google\gax\src\Middleware\CredentialsWrapperMiddleware.php(61): Google\Ads\GoogleAds\V5\Services\Gapic\GoogleAdsServiceGapicClient->Google\ApiCore{closure}(Object(Google\ApiCore\Call), Array)
#3 C:\wamp64\www\google-ads-php\vendor\google\gax\src\Middleware\FixedHeaderMiddleware.php(67): Google\ApiCore\Middleware\CredentialsWrapperMiddleware->__invoke(Object(Google\ApiCore\Call), Array)
#4 C:\wamp64\www\google-ads-php\vendor\g in C:\wamp64\www\google-ads-php\vendor\google\gax\src\Transport\HttpUnaryTransportTrait.php on line 125
