
Source Amazon Seller Partner: GET_AFN_INVENTORY_DATA fails to wait for data #33508

Closed
LGPCroeder opened this issue Dec 14, 2023 · 7 comments
Connector Name

source-amazon-seller-partner

Connector Version

2.5.0

What step the error happened?

None

Relevant information

I added the GET_AFN_INVENTORY_DATA stream to my existing Amazon Seller Partner source, and now the GET_AFN_INVENTORY_DATA stream errors every time a job is run. The replication jobs as a whole still succeed, because the OrderItems and Orders streams complete on the second attempt, where GET_AFN_INVENTORY_DATA is not retried.

Relevant log output

>> ATTEMPT 1/2

2023-12-14 18:35:39 source > Marking stream GET_AFN_INVENTORY_DATA as STARTED
2023-12-14 18:35:39 replication-orchestrator > Attempt 0 to stream status started null:GET_AFN_INVENTORY_DATA
2023-12-14 18:35:39 source > Syncing stream: GET_AFN_INVENTORY_DATA
2023-12-14 18:36:37 destination > 2023-12-14 18:36:37 INFO i.a.c.i.d.b.BufferManager(printQueueInfo):118 - [ASYNC QUEUE INFO] Global: max: 296.97 MB, allocated: 10 MB (10.0 MB), % used: 0.03367286815141938 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.000000
2023-12-14 18:36:37 destination > 2023-12-14 18:36:37 INFO i.a.c.i.d.FlushWorkers(printWorkerInfo):146 - [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
2023-12-14 18:36:40 replication-orchestrator > Attempt 0 to update stream status incomplete null:GET_AFN_INVENTORY_DATA
2023-12-14 18:36:40 replication-orchestrator > Attempt 0 to update stream status incomplete null:GET_AFN_INVENTORY_DATA error: io.airbyte.api.client.invoker.generated.ApiException: updateStreamStatus call failed with: 400 - {"message":"Incomplete run cause must be set for runs that stopped in an incomplete state.","exceptionClassName":"io.airbyte.server.apis.StreamStatusesApiController$Validations$2","exceptionStack":["io.airbyte.server.apis.StreamStatusesApiController$Validations$2: Incomplete run cause must be set for runs that stopped in an incomplete state.","\tat io.airbyte.server.apis.StreamStatusesApiController$Validations.validate(StreamStatusesApiController.java:97)","\tat io.airbyte.server.apis.StreamStatusesApiController.updateStreamStatus(StreamStatusesApiController.java:58)","\tat io.airbyte.server.apis.$StreamStatusesApiController$Definition$Exec.dispatch(Unknown Source)","\tat io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:371)","\tat io.micronaut.context.DefaultBeanContext$4.invoke(DefaultBeanContext.java:594)","\tat io.micronaut.web.router.AbstractRouteMatch.execute(AbstractRouteMatch.java:303)","\tat io.micronaut.web.router.RouteMatch.execute(RouteMatch.java:111)","\tat io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:103)","\tat io.micronaut.http.server.RouteExecutor.lambda$executeRoute$14(RouteExecutor.java:659)","\tat reactor.core.publisher.FluxDeferContextual.subscribe(FluxDeferContextual.java:49)","\tat reactor.core.publisher.InternalFluxOperator.subscribe(InternalFluxOperator.java:62)","\tat reactor.core.publisher.FluxSubscribeOn$SubscribeOnSubscriber.run(FluxSubscribeOn.java:194)","\tat io.micronaut.reactive.reactor.instrument.ReactorInstrumentation.lambda$init$0(ReactorInstrumentation.java:62)","\tat reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)","\tat 
reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)","\tat io.micronaut.scheduling.instrument.InvocationInstrumenterWrappedCallable.call(InvocationInstrumenterWrappedCallable.java:53)","\tat io.micrometer.core.instrument.composite.CompositeTimer.recordCallable(CompositeTimer.java:129)","\tat io.micrometer.core.instrument.Timer.lambda$wrap$1(Timer.java:206)","\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)","\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)","\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)","\tat java.base/java.lang.Thread.run(Thread.java:1589)"]}
[Attempts 1 and 2 to update the stream status fail with the same updateStreamStatus 400 error and stack trace as attempt 0.]
2023-12-14 18:37:37 destination > 2023-12-14 18:37:37 INFO i.a.c.i.d.b.BufferManager(printQueueInfo):118 - [ASYNC QUEUE INFO] Global: max: 296.97 MB, allocated: 10 MB (10.0 MB), % used: 0.03367286815141938 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.000000
2023-12-14 18:37:37 destination > 2023-12-14 18:37:37 INFO i.a.c.i.d.FlushWorkers(printWorkerInfo):146 - [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
[The same idle BufferManager/FlushWorkers pair repeats every minute through 18:46:37.]
2023-12-14 18:46:42 replication-orchestrator > Attempt 3 to update stream status incomplete null:GET_AFN_INVENTORY_DATA
[Attempt 3 fails with the same updateStreamStatus 400 error and stack trace as attempt 0.]
2023-12-14 18:46:42 replication-orchestrator > Unable to update status for stream null:GET_AFN_INVENTORY_DATA (id = 452c48c8-901b-42b8-8450-705ba804695a, origin = SOURCE, context = ReplicationContext[isReset=false, connectionId=ea7d52ae-c493-4dbb-a9ca-44591aef6431, sourceId=09a35710-3857-47e0-b6b9-f24c51ea65c5, destinationId=b5075bb5-3515-4342-85fb-7d27d0fc2eef, jobId=6678028, attempt=0, workspaceId=5da44738-9b16-4047-933c-907bcb368c05])
2023-12-14 18:46:42 source > Finished syncing GET_AFN_INVENTORY_DATA
2023-12-14 18:46:42 source > SourceAmazonSellerPartner runtimes:
Syncing stream GET_AFN_INVENTORY_DATA 0:02:02.888661
2023-12-14 18:46:42 source > None
Traceback (most recent call last):
  File "/airbyte/integration_code/main.py", line 16, in <module>
    launch(source, sys.argv[1:])
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/entrypoint.py", line 210, in launch
    for message in source_entrypoint.run(parsed_args):
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/entrypoint.py", line 117, in run
    yield from map(AirbyteEntrypoint.airbyte_message_to_string, self.read(source_spec, config, config_catalog, state))
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/entrypoint.py", line 159, in read
    yield from self.source.read(self.logger, config, catalog, state)
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py", line 132, in read
    raise e
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py", line 121, in read
    yield from self._read_stream(
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py", line 193, in _read_stream
    for record in record_iterator:
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py", line 253, in _read_full_refresh
    for record_data_or_message in stream_instance.read_full_refresh(configured_stream.cursor_field, logger, self._slice_logger):
  File "/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/streams/core.py", line 121, in read_full_refresh
    yield from self.read_records(
  File "/airbyte/integration_code/source_amazon_seller_partner/streams.py", line 349, in read_records
    raise AirbyteTracedException(message=f"The report for stream '{self.name}' was not created - skip reading")
airbyte_cdk.utils.traced_exception.AirbyteTracedException: None
2023-12-14 18:46:42 replication-orchestrator > (pod: jobs / source-amazon-seller-partner-read-6678028-0-mgvrb) - Closed all resources for pod
2023-12-14 18:46:42 replication-orchestrator > Total records read: 3 (0 bytes)
2023-12-14 18:46:42 replication-orchestrator > Schema validation was performed to a max of 10 records with errors per stream.
2023-12-14 18:46:42 replication-orchestrator > Attempt 0 to update stream status incomplete null:GET_AFN_INVENTORY_DATA
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.c.i.d.FlushWorkers(close):191 - Closing flush workers -- waiting for all buffers to flush
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.c.i.d.FlushWorkers(close):220 - Closing flush workers -- all buffers flushed
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.c.i.d.GlobalMemoryManager(free):88 - Freeing 0 bytes..
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.c.i.d.FlushWorkers(close):226 - Closing flush workers -- Supervisor shutdown status: true
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.c.i.d.FlushWorkers(close):228 - Closing flush workers -- Starting worker pool shutdown..
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.c.i.d.FlushWorkers(close):231 - Closing flush workers -- Workers shutdown status: true
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.c.i.d.b.BufferManager(close):92 - Buffers cleared..
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.i.b.d.t.DefaultTyperDeduper(typeAndDedupe):247 - Typing and deduping all tables
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.i.b.d.t.DefaultTyperDeduper(lambda$typeAndDedupe$4):264 - Skipping typing and deduping for stream airbyte_sync.AMZ_SM_GET_AFN_INVENTORY_DATA because it had no records during this sync and no unprocessed records from a previous sync.
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.i.b.d.t.DefaultTyperDeduper(lambda$typeAndDedupe$4):264 - Skipping typing and deduping for stream airbyte_sync.AMZ_SM_Orders because it had no records during this sync and no unprocessed records from a previous sync.
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.i.b.d.t.DefaultTyperDeduper(lambda$typeAndDedupe$4):264 - Skipping typing and deduping for stream airbyte_sync.AMZ_SM_OrderItems because it had no records during this sync and no unprocessed records from a previous sync.
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.i.d.b.BigQueryStagingConsumerFactory(lambda$onCloseFunction$4):179 - Cleaning up destination started for 3 streams
2023-12-14 18:46:42 destination > 2023-12-14 18:46:42 INFO i.a.i.d.b.BigQueryGcsOperations(dropStageIfExists):186 - Cleaning up staging path for stream AMZ_SM_OrderItems (dataset airbyte_internal): lgp_airbyte_sync/data/airbyte_internal_AMZ_SM_OrderItems
2023-12-14 18:46:43 destination > 2023-12-14 18:46:43 INFO i.a.i.d.b.BigQueryGcsOperations(dropStageIfExists):186 - Cleaning up staging path for stream AMZ_SM_GET_AFN_INVENTORY_DATA (dataset airbyte_internal): lgp_airbyte_sync/data/airbyte_internal_AMZ_SM_GET_AFN_INVENTORY_DATA
2023-12-14 18:46:43 destination > 2023-12-14 18:46:43 INFO i.a.i.d.b.BigQueryGcsOperations(dropStageIfExists):186 - Cleaning up staging path for stream AMZ_SM_Orders (dataset airbyte_internal): lgp_airbyte_sync/data/airbyte_internal_AMZ_SM_Orders
2023-12-14 18:46:43 destination > 2023-12-14 18:46:43 INFO i.a.i.b.d.t.DefaultTyperDeduper(commitFinalTables):282 - Committing final tables
2023-12-14 18:46:43 destination > 2023-12-14 18:46:43 INFO i.a.i.b.d.t.DefaultTyperDeduper(cleanup):323 - Cleaning Up type-and-dedupe thread pool
2023-12-14 18:46:43 destination > 2023-12-14 18:46:43 INFO i.a.i.d.b.BigQueryStagingConsumerFactory(lambda$onCloseFunction$4):185 - Cleaning up destination completed.
2023-12-14 18:46:43 destination > 2023-12-14 18:46:43 INFO i.a.c.i.d.AsyncStreamConsumer(close):219 - class io.airbyte.cdk.integrations.destination_async.AsyncStreamConsumer closed
2023-12-14 18:46:43 destination > 2023-12-14 18:46:43 INFO i.a.c.i.b.IntegrationRunner(runInternal):231 - Completed integration: io.airbyte.integrations.destination.bigquery.BigQueryDestination
2023-12-14 18:46:43 replication-orchestrator > (pod: jobs / destination-bigquery-write-6678028-0-isaxz) - Closed all resources for pod
2023-12-14 18:46:43 replication-orchestrator > thread status... timeout thread: false , replication thread: true
2023-12-14 18:46:43 replication-orchestrator > Sync worker failed.
java.util.concurrent.ExecutionException: io.airbyte.workers.internal.exception.SourceException: Source process exited with non-zero exit code 1
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) ~[?:?]
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) ~[?:?]
	at io.airbyte.workers.general.DefaultReplicationWorker.replicate(DefaultReplicationWorker.java:213) ~[io.airbyte-airbyte-commons-worker-dev-2775b7fb77.jar:?]
	at io.airbyte.workers.general.DefaultReplicationWorker.run(DefaultReplicationWorker.java:143) ~[io.airbyte-airbyte-commons-worker-dev-2775b7fb77.jar:?]
	at io.airbyte.workers.general.DefaultReplicationWorker.run(DefaultReplicationWorker.java:63) ~[io.airbyte-airbyte-commons-worker-dev-2775b7fb77.jar:?]
	at io.airbyte.container_orchestrator.orchestrator.ReplicationJobOrchestrator.runJob(ReplicationJobOrchestrator.java:117) ~[io.airbyte-airbyte-container-orchestrator-dev-2775b7fb77.jar:?]
	at io.airbyte.container_orchestrator.Application.run(Application.java:78) ~[io.airbyte-airbyte-container-orchestrator-dev-2775b7fb77.jar:?]
	at io.airbyte.container_orchestrator.Application.main(Application.java:38) ~[io.airbyte-airbyte-container-orchestrator-dev-2775b7fb77.jar:?]
	Suppressed: io.airbyte.workers.exception.WorkerException: Source process exit with code 1. This warning is normal if the job was cancelled.
		at io.airbyte.workers.internal.DefaultAirbyteSource.close(DefaultAirbyteSource.java:158) ~[io.airbyte-airbyte-commons-worker-dev-2775b7fb77.jar:?]
		at io.airbyte.workers.general.DefaultReplicationWorker.replicate(DefaultReplicationWorker.java:161) ~[io.airbyte-airbyte-commons-worker-dev-2775b7fb77.jar:?]
		at io.airbyte.workers.general.DefaultReplicationWorker.run(DefaultReplicationWorker.java:143) ~[io.airbyte-airbyte-commons-worker-dev-2775b7fb77.jar:?]
		at io.airbyte.workers.general.DefaultReplicationWorker.run(DefaultReplicationWorker.java:63) ~[io.airbyte-airbyte-commons-worker-dev-2775b7fb77.jar:?]
		at io.airbyte.container_orchestrator.orchestrator.ReplicationJobOrchestrator.runJob(ReplicationJobOrchestrator.java:117) ~[io.airbyte-airbyte-container-orchestrator-dev-2775b7fb77.jar:?]
		at io.airbyte.container_orchestrator.Application.run(Application.java:78) ~[io.airbyte-airbyte-container-orchestrator-dev-2775b7fb77.jar:?]
		at io.airbyte.container_orchestrator.Application.main(Application.java:38) ~[io.airbyte-airbyte-container-orchestrator-dev-2775b7fb77.jar:?]
Caused by: io.airbyte.workers.internal.exception.SourceException: Source process exited with non-zero exit code 1
	at io.airbyte.workers.general.DefaultReplicationWorker.lambda$readFromSrcAndWriteToDstRunnable$8(DefaultReplicationWorker.java:382) ~[io.airbyte-airbyte-commons-worker-dev-2775b7fb77.jar:?]
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
	at java.lang.Thread.run(Thread.java:1589) ~[?:?]
2023-12-14 18:46:43 replication-orchestrator > sync summary: {
  "status" : "failed",
  "startTime" : 1702578865772,
  "endTime" : 1702579603482,
  "totalStats" : {
    "bytesEmitted" : 0,
    "destinationStateMessagesEmitted" : 0,
    "destinationWriteEndTime" : 1702579603194,
    "destinationWriteStartTime" : 1702578865789,
    "meanSecondsBeforeSourceStateMessageEmitted" : 0,
    "maxSecondsBeforeSourceStateMessageEmitted" : 0,
    "meanSecondsBetweenStateMessageEmittedandCommitted" : 0,
    "recordsEmitted" : 0,
    "replicationEndTime" : 0,
    "replicationStartTime" : 1702578865772,
    "sourceReadEndTime" : 1702579602375,
    "sourceReadStartTime" : 1702578872355,
    "sourceStateMessagesEmitted" : 0
  },
  "streamStats" : [ ]
}
2023-12-14 18:46:43 replication-orchestrator > failures: [ {
  "failureOrigin" : "source",
  "failureType" : "system_error",
  "externalMessage" : "The report for stream 'GET_AFN_INVENTORY_DATA' was not created - skip reading",
  "metadata" : {
    "attemptNumber" : 0,
    "jobId" : 6678028,
    "from_trace_message" : true,
    "connector_command" : "read"
  },
  "stacktrace" : "Traceback (most recent call last):\n  File \"/airbyte/integration_code/main.py\", line 16, in <module>\n    launch(source, sys.argv[1:])\n  File \"/usr/local/lib/python3.9/site-packages/airbyte_cdk/entrypoint.py\", line 210, in launch\n    for message in source_entrypoint.run(parsed_args):\n  File \"/usr/local/lib/python3.9/site-packages/airbyte_cdk/entrypoint.py\", line 117, in run\n    yield from map(AirbyteEntrypoint.airbyte_message_to_string, self.read(source_spec, config, config_catalog, state))\n  File \"/usr/local/lib/python3.9/site-packages/airbyte_cdk/entrypoint.py\", line 159, in read\n    yield from self.source.read(self.logger, config, catalog, state)\n  File \"/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py\", line 132, in read\n    raise e\n  File \"/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py\", line 121, in read\n    yield from self._read_stream(\n  File \"/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py\", line 193, in _read_stream\n    for record in record_iterator:\n  File \"/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/abstract_source.py\", line 253, in _read_full_refresh\n    for record_data_or_message in stream_instance.read_full_refresh(configured_stream.cursor_field, logger, self._slice_logger):\n  File \"/usr/local/lib/python3.9/site-packages/airbyte_cdk/sources/streams/core.py\", line 121, in read_full_refresh\n    yield from self.read_records(\n  File \"/airbyte/integration_code/source_amazon_seller_partner/streams.py\", line 349, in read_records\n    raise AirbyteTracedException(message=f\"The report for stream '{self.name}' was not created - skip reading\")\nairbyte_cdk.utils.traced_exception.AirbyteTracedException: None\n",
  "timestamp" : 1702579000174
}, {
  "failureOrigin" : "source",
  "internalMessage" : "Source process exited with non-zero exit code 1",
  "externalMessage" : "Something went wrong within the source connector",
  "metadata" : {
    "attemptNumber" : 0,
    "jobId" : 6678028,
    "connector_command" : "read"
  },
  "stacktrace" : "io.airbyte.workers.internal.exception.SourceException: Source process exited with non-zero exit code 1\n\tat io.airbyte.workers.general.DefaultReplicationWorker.lambda$readFromSrcAndWriteToDstRunnable$8(DefaultReplicationWorker.java:382)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1589)\n",
  "timestamp" : 1702579602383
} ]
2023-12-14 18:46:43 replication-orchestrator > Returning output...
2023-12-14 18:46:43 replication-orchestrator >
2023-12-14 18:46:43 replication-orchestrator > ----- END REPLICATION -----
2023-12-14 18:46:43 replication-orchestrator >
2023-12-14 18:46:43 replication-orchestrator > Writing async status SUCCEEDED for KubePodInfo[namespace=jobs, name=orchestrator-repl-job-6678028-attempt-0, mainContainerInfo=KubeContainerInfo[image=airbyte/container-orchestrator:dev-2775b7fb77, pullPolicy=IfNotPresent]]...
2023-12-14 18:46:44 platform > Retry State: RetryManager(completeFailureBackoffPolicy=BackoffPolicy(minInterval=PT10S, maxInterval=PT30M, base=3), partialFailureBackoffPolicy=null, successiveCompleteFailureLimit=5, totalCompleteFailureLimit=5, successivePartialFailureLimit=1000, totalPartialFailureLimit=10, successiveCompleteFailures=1, totalCompleteFailures=1, successivePartialFailures=0, totalPartialFailures=0)
 Backoff before next attempt: 10 seconds
2023-12-14 18:46:43 platform > State Store reports orchestrator pod orchestrator-repl-job-6678028-attempt-0 succeeded


>> ATTEMPT 2/2

2023-12-14 18:59:58 replication-orchestrator > sync summary: {
  "status" : "completed",
  "recordsSynced" : 141,
  
2023-12-14 18:59:58 replication-orchestrator > Writing async status SUCCEEDED for KubePodInfo[namespace=jobs, name=orchestrator-repl-job-6678028-attempt-1, mainContainerInfo=KubeContainerInfo[image=airbyte/container-orchestrator:dev-2775b7fb77, pullPolicy=IfNotPresent]]...
2023-12-14 19:00:01 platform > State Store reports orchestrator pod orchestrator-repl-job-6678028-attempt-1 succeeded

@askarpets
Contributor

Hello @LGPCroeder,
This error is caused by an issue on the Amazon Seller Partner side; however, we have made some changes to reduce the number of requests to their API. Could you please upgrade your connector to version 3.2.2 and check if it helps?
Thanks!
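For context, GET_AFN_INVENTORY_DATA is one of the SP-API report streams: the connector asks Amazon to generate a report and then polls its processing status until the report is ready, and when the report never reaches a terminal state it raises the "report was not created" error shown above. A minimal, hypothetical sketch of that poll-with-backoff pattern (the status values mirror SP-API's `processingStatus`; `fetch_status` is a stand-in for the real HTTP call, not the connector's actual code):

```python
import time

def wait_for_report(fetch_status, max_polls=10, initial_delay=0.01, factor=2.0):
    """Poll until the report reaches a terminal state or we give up."""
    delay = initial_delay
    for _ in range(max_polls):
        status = fetch_status()
        if status in ("DONE", "FATAL", "CANCELLED"):
            return status
        time.sleep(delay)   # back off before the next poll
        delay *= factor     # exponential backoff keeps request volume down

    return None  # never reached a terminal state -> "report was not created"

# Simulated server: the report is in progress for three polls, then completes.
responses = iter(["IN_QUEUE", "IN_PROGRESS", "IN_PROGRESS", "DONE"])
print(wait_for_report(lambda: next(responses)))  # -> DONE
```

The backoff factor is the knob that matters here: growing the delay between polls is what "reduce the number of requests to their API" amounts to in practice.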

@LGPCroeder
Author

LGPCroeder commented Feb 8, 2024 via email

@LGPCroeder
Author

@askarpets
Contributor

@LGPCroeder could you please enable incremental sync mode for this stream so that records are written to the destination after each slice is read? This will let you keep the data that has already been read and sync the rest on subsequent incremental reads.

@LGPCroeder
Author

@askarpets What would the primary key be (dataEndTime)?

@askarpets
Contributor

@LGPCroeder I'm not sure a primary key can be defined for this stream, but for incremental sync you should use dataEndTime as the cursor_field.
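For anyone finding this later, a configured-catalog entry along those lines would look roughly like this (a hypothetical fragment written as a plain Python dict; the field names follow Airbyte's configured catalog schema, and `cursor_field` is a list because nested cursors are expressed as field paths):

```python
# Hypothetical configured-catalog entry for the setup suggested above:
# incremental sync, no primary key, dataEndTime as the cursor field.
configured_stream = {
    "stream": {"name": "GET_AFN_INVENTORY_DATA"},
    "sync_mode": "incremental",
    "cursor_field": ["dataEndTime"],
    "destination_sync_mode": "append",
}

print(configured_stream["cursor_field"])  # -> ['dataEndTime']
```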

@askarpets
Contributor

Closing for now. Please reopen if needed.
