
BLOB STORAGE: Pooled connection observed an error reactor.netty.http.client.HttpClientOperations$PrematureCloseException: Connection prematurely closed BEFORE response #5180

Closed · 2 tasks done
vmaheshw opened this issue Aug 30, 2019 · 22 comments
Labels: Azure.Core azure-core · Client (This issue points to a problem in the data-plane of the library.) · customer-reported (Issues that are reported by GitHub users external to the Azure organization.)

Comments

@vmaheshw commented Aug 30, 2019

Is your feature request related to a problem? Please describe.
Query/Question
There are no examples for the async API. I'm currently most interested in BlockBlob stageBlock and providing a replayable Flux.

I'm seeing this in my code.

java.lang.IllegalStateException: The request failed because the size of the contents of the provided Flux did not match the provided data size upon attempting to retry. This is likely caused by the Flux not being replayable. To support retries, all Fluxes must produce the same data for each subscriber. Please ensure this behavior.

Code Snippet:
byte[] byteArray;   // populated elsewhere; the first blockSize bytes are uploaded
ByteBuf buf = Unpooled.wrappedBuffer(byteArray, 0, blockSize);
_blobClient.stageBlock(blockId, Flux.just(buf), blockSize).block();

Describe the solution you'd like
There should be samples for the async API.


Information Checklist
Kindly make sure that you have added all of the following information above and checked off the required fields; otherwise we will treat the issue as an incomplete report.

  • Description Added
  • Expected solution specified
@rickle-msft (Contributor)

Hi, @vmaheshw. We will be working to add more samples before we GA. We are aware that they need to be filled out more; it's just a matter of balancing feature work with samples. But as I said, they'll be more complete before GA.

Assuming your blockSize is the same as the size of your byteArray, I'm a bit surprised you're hitting this exception. Are you able to reproduce this consistently? And are you able to share more of a stack trace?

@vmaheshw (Author)

I saw this in a different code path this time.
2019/08/30 18:22:19.428 ERROR [PooledConnectionProvider] Pooled connection observed an error reactor.netty.http.client.HttpClientOperations$PrematureCloseException: Connection prematurely closed BEFORE response
2019/08/30 18:22:19.434 WARN [stageBlock] <-- HTTP FAILED:
2019/08/30 18:22:19.434 INFO [stageBlock] --> PUT
2019/08/30 18:22:23.708 ERROR [PooledConnectionProvider] Pooled connection observed an error io.netty.util.IllegalReferenceCountException: refCnt: 0
at io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1464) ~[netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.buffer.AbstractByteBuf.duplicate(AbstractByteBuf.java:1207) ~[netty-all-4.1.38.Final.jar:4.1.38.Final]
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:107) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.FluxJust$WeakScalarSubscription.request(FluxJust.java:99) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:162) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:162) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.netty.channel.ChannelOperationsHandler$PublisherSender.onSubscribe(ChannelOperationsHandler.java:715) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onSubscribe(FluxMapFuseable.java:90) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onSubscribe(FluxMapFuseable.java:90) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.FluxJust.subscribe(FluxJust.java:70) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.FluxMapFuseable.subscribe(FluxMapFuseable.java:63) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.FluxMapFuseable.subscribe(FluxMapFuseable.java:63) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.netty.FutureMono$1.subscribe(FutureMono.java:142) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.core.publisher.Flux.subscribe(Flux.java:7799) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.netty.channel.ChannelOperationsHandler.drain(ChannelOperationsHandler.java:460) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.netty.channel.ChannelOperationsHandler.flush(ChannelOperationsHandler.java:194) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:789) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:757) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1031) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:298) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at reactor.netty.FutureMono$DeferredWriteMono.subscribe(FutureMono.java:330) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:153) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.Mono.subscribe(Mono.java:3710) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.netty.NettyOutbound.subscribe(NettyOutbound.java:317) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.core.publisher.MonoSource.subscribe(MonoSource.java:51) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) [reactor-core-3.2.9.RELEASE.jar:3.2.9.RELEASE]
at reactor.netty.http.client.HttpClientConnect$HttpObserver.onStateChange(HttpClientConnect.java:397) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.netty.resources.PooledConnectionProvider$DisposableAcquire.onStateChange(PooledConnectionProvider.java:501) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.netty.resources.PooledConnectionProvider$PooledConnection.onStateChange(PooledConnectionProvider.java:443) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.netty.resources.PooledConnectionProvider$DisposableAcquire.onStateChange(PooledConnectionProvider.java:501) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.netty.resources.PooledConnectionProvider$PooledConnection.onStateChange(PooledConnectionProvider.java:443) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at reactor.netty.channel.ChannelOperationsHandler.channelActive(ChannelOperationsHandler.java:112) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:225) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:211) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:204) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelActive(CombinedChannelDuplexHandler.java:414) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.channelActive(ChannelInboundHandlerAdapter.java:69) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.CombinedChannelDuplexHandler.channelActive(CombinedChannelDuplexHandler.java:213) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:225) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:211) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:204) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at reactor.netty.tcp.SslProvider$SslReadHandler.userEventTriggered(SslProvider.java:724) [reactor-netty-0.8.3.RELEASE.jar:0.8.3.RELEASE]
at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:341) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:327) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:319) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.handler.ssl.SslHandler.setHandshakeSuccess(SslHandler.java:1745) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1409) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1224) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1271) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:505) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.handler.proxy.ProxyHandler.channelRead(ProxyHandler.java:255) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:255) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:794) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:424) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:326) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.38.Final.jar:4.1.38.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
2019/08/30 18:22:23.711 WARN [stageBlock] <-- HTTP FAILED:
2019/08/30 18:22:23.714 ERROR [] The request failed because the size of the contents of the provided Flux did not match the provided data size upon attempting to retry. This is likely caused by the Flux not being replayable. To support retries, all Fluxes must produce the same data for each subscriber. Please ensure this behavior.

Basically, the sample code I pasted above does not work with the internal retry logic when a network-related issue occurs.

So two things to investigate:

  1. Why is the connection closing prematurely?
  2. Why is the retry not working?

@vmaheshw (Author)

@rickle-msft: Should I create another bug ticket for the crash, so that the async example request and the exception are tracked separately?

@rickle-msft (Contributor)

Sure. It might be better to rename this one for the crash, since there's already context here, and then open a new one for the examples that just references this issue, since we've already acknowledged that ask.

@vmaheshw vmaheshw changed the title [QUERY]Add Examples for Async Storage Blob APIs BLOB STORAGE: Pooled connection observed an error reactor.netty.http.client.HttpClientOperations$PrematureCloseException: Connection prematurely closed BEFORE response Aug 30, 2019
@rickle-msft (Contributor)

@vmaheshw I've run the code snippet you gave me locally and forced an IO error by unplugging my network connection. I also reran our test for retrying network errors from the commit for this release, and in both cases it behaved as expected.

I have a suspicion that this line is actually underlying most of what's going on:
Pooled connection observed an error io.netty.util.IllegalReferenceCountException: refCnt: 0

IllegalReferenceCountException extends IllegalStateException, which is what we check for in the retry policy to give that extended error message, and we don't retry an IllegalStateException, which explains why you're seeing it fail so quickly. I also think that would explain why the connection gets closed prematurely: we're failing to read from your ByteBuf, so we cancel the operation.

The next question, then, is why your ByteBuf has a refCount of 0. Could you try setting a watch point on that value for your ByteBuf object and see where it's getting decremented?
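For reference, one quick alternative to a debugger watch point is to log the buffer's reference count right before the call. A minimal sketch, reusing the names from the snippet earlier in this thread (the println is only illustrative):

ByteBuf buf = Unpooled.wrappedBuffer(byteArray, 0, blockSize);
// refCnt() is Netty's reference count; a freshly wrapped buffer reports 1
System.out.println("refCnt before stageBlock for block " + blockId + ": " + buf.refCnt());
_blobClient.stageBlock(blockId, Flux.just(buf), blockSize).block();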

@vmaheshw (Author) commented Sep 3, 2019

@rickle-msft:
I have a log statement before stageBlock that shows the block size, and I already have a non-zero-length check.

2019/08/30 18:21:54.198 INFO com.azure.storage.blob.BlockBlobAsyncClient@ Upload block start for blob: 00057 for block size:10524791.

_blobClient.stageBlock(blockIdEncoded, Flux.just(buf), blockSize).block();
After this log we call stageBlock(...).block(). One more point: this is not happening every time. It happened after a 12-hour run.

@kaerm kaerm added Client This issue points to a problem in the data-plane of the library. customer-reported Issues that are reported by GitHub users external to the Azure organization. Storage Storage Service (Queues, Blobs, Files) labels Sep 3, 2019
@triage-new-issues triage-new-issues bot removed the triage label Sep 3, 2019
@rickle-msft (Contributor)

@vmaheshw Sorry for not being clear. I wasn't suggesting that the length of the block is zero. ByteBufs are explicitly reference counted and will throw this exception if someone tries to read from the buffer after the reference count has been decremented to zero, because the buffer has effectively been marked for deallocation. I was asking if you'd be able to set a watch point on this reference count value to determine when it is changing (probably somewhere internal to the SDK).
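To make the reference-counting behavior concrete, here is a minimal standalone illustration of the ByteBuf life cycle (plain Netty, not SDK code):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

ByteBuf buf = Unpooled.wrappedBuffer(new byte[] {1, 2, 3});
System.out.println(buf.refCnt()); // 1 -- a new buffer starts with one reference
buf.retain();                     // 2 -- an additional owner keeps it alive
buf.release();                    // back to 1
buf.release();                    // 0 -- the buffer is now considered deallocated
buf.readByte();                   // throws IllegalReferenceCountException: refCnt: 0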

Something worth noting here is that in preview 3, which I think will be out in about a week, we are changing this API signature to accept ByteBuffer instead of ByteBuf, so hopefully that will fix this issue for free, if you'd rather wait for the next preview instead of spending more cycles debugging this now.

@Jianghao It's interesting that this is happening only after a 12-hour run. Did you ever encounter anything like this in your stress testing? (TL;DR: it seems that something is trying to read a ByteBuf with refCnt 0 and throwing.)

@kurtzeborn (Member)

Assigning to @jianghaolu as this appears to be azure-core related.

CC: @alzimmermsft

@alzimmermsft alzimmermsft added Azure.Core azure-core and removed Storage Storage Service (Queues, Blobs, Files) labels Sep 26, 2019
@JonathanGiles (Member)

Can this be validated against the latest azure-core that brings in the updated Reactor / Netty dependencies?

@anuchandy (Member)

I think there is an issue with the code below:

byte[] byteArray;
ByteBuf buf = Unpooled.wrappedBuffer(byteArray, 0, blockSize);
_blobClient.stageBlock(blockId, Flux.just(buf), blockSize).block();

My gut feeling is that although the provided Flux.just(buf) is replayable, the replay emits an invalid ByteBuf. The first subscription to the Flux by the underlying HTTP client (reactor-netty) releases the emitted ByteBuf after writing it to the channel/socket. That release reduces the ref_count by 1. If the code path requires resubscribing to the same Flux, it then emits the already-released ByteBuf, i.e. the one with ref_count 0. The solution is to ensure each subscription to the Flux gets a valid ByteBuf, i.e. to make it a truly replayable publisher. The just operator alone cannot guarantee this; it has to be combined with defer:

Flux<ByteBuf> inputFlux = Flux.defer(() -> {
    // byteArray is initialized as before; a fresh ByteBuf is created for every subscriber
    ByteBuf buf = Unpooled.wrappedBuffer(byteArray, 0, blockSize);
    return Flux.just(buf);
});
_blobClient.stageBlock(blockId, inputFlux, blockSize).block();

This way, using defer, we ensure the per-subscriber state is valid.

@anuchandy (Member)

Ok, I can see that the storage blob RequestRetryPolicy is trying to help here a bit, so that the consumer is not forced to write defer logic like the above. The following code in the preview 2 RequestRetryPolicy:

Flux<ByteBuf> bufferedBody =
    (context.httpRequest().body() == null)
        ? null
        : context.httpRequest().body().map(ByteBuf::duplicate);

indeed calls ByteBuf::duplicate to produce a ByteBuf that shares the underlying array. This code seems to be the root cause of the bug: duplicate does not increase the ref_count. Instead it should be ByteBuf::retain, which increases the ref_count, and it must be followed by ByteBuf::release in doFinally to reduce the ref_count.
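A standalone sketch of that difference in plain Netty (not the SDK code): duplicate shares the original's reference count, while retain adds a reference that a later release gives back:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

ByteBuf original = Unpooled.wrappedBuffer(new byte[] {1, 2, 3});
ByteBuf shared = original.duplicate();          // same ref_count as the original (still 1)
shared.release();                               // ref_count 0 -- BOTH views are now dead
// original.readByte();                         // would throw IllegalReferenceCountException

ByteBuf original2 = Unpooled.wrappedBuffer(new byte[] {1, 2, 3});
ByteBuf view = original2.retainedDuplicate();   // duplicate().retain(): ref_count 1 -> 2
view.release();                                 // ref_count back to 1, original2 still usable
original2.release();                            // ref_count 0, the buffer is freed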

I guess that, with preview 2, the defer approach proposed above will work around this bug.

@anuchandy (Member) commented Oct 2, 2019

Merging the info from my last two comments and simplifying it into a single flow, I think the following is the likely sequence causing the error (a minimal sketch reproducing it follows the list):

  1. The user provided a Flux emitting bb1 (which has ref_count 1).
  2. The RetryPolicy duplicated bb1 to produce bb2. bb2 is simply a view over the underlying array owned by bb1 and shares the same ref_count of 1. bb2 is given to the HttpClient.
  3. The HttpClient writes bb2 to the wire and reduces the ref_count to 0; now both bb1 and bb2 have ref_count 0 (this is the behavior of duplicate).
  4. The write operation in step 3 failed (for some reason, e.g. a network error), hence control goes back to the RetryPolicy.
  5. The RetryPolicy resubscribed to the user-provided Flux, which again emits bb1, now with ref_count 0.
  6. The RetryPolicy duplicated bb1 to produce bb3; both bb3 and bb1 have ref_count 0. bb3 is given to the HttpClient.
  7. The HttpClient sees that bb3 has ref_count 0 and throws IllegalReferenceCountException.
  8. As Rick mentioned, the RetryPolicy re-throws this as an IllegalStateException.
  9. The channel is then closed, surfacing the PrematureCloseException.
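This sequence can be reproduced with plain Reactor and Netty, without the SDK. A minimal sketch in which the second subscription stands in for the retry:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import reactor.core.publisher.Flux;

ByteBuf bb1 = Unpooled.wrappedBuffer("payload".getBytes());   // step 1: ref_count 1
Flux<ByteBuf> body = Flux.just(bb1).map(ByteBuf::duplicate);  // roughly what the policy's map(ByteBuf::duplicate) does

// First attempt: the "HTTP client" reads the duplicate and releases it (step 3).
body.subscribe(bb2 -> { bb2.readByte(); bb2.release(); });    // shared ref_count drops to 0

// Retry: resubscribing re-emits bb1, now with ref_count 0, and duplicating it throws (steps 5-7).
body.subscribe(bb3 -> bb3.readByte(),
        error -> System.out.println(error));                  // IllegalReferenceCountException: refCnt: 0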

@anuchandy (Member)

@JonathanGiles I will double-check the code to see whether the current ByteBuffer-based core/storage has a similar flaw in other places.

@JonathanGiles (Member)

Thanks @anuchandy

@rickle-msft (Contributor)

@anuchandy Thanks for investigating this. I suspect that since we moved back to accepting ByteBuffers in preview 3, the logic in the retry policy (duplicate) should work now, right?

@anuchandy (Member)

@rickle-msft right, with ByteBuffer in preview 3, the usage of the duplicate operation in the RetryPolicy is valid.
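For completeness, a small JDK-only sketch of why duplicate is harmless for java.nio.ByteBuffer: there is no reference counting, and duplicate() only copies the position/limit bookkeeping, so a retry can always re-read the data:

import java.nio.ByteBuffer;

byte[] data = "payload".getBytes();
ByteBuffer original = ByteBuffer.wrap(data);

ByteBuffer attempt1 = original.duplicate();          // independent position/limit, shared content
while (attempt1.hasRemaining()) { attempt1.get(); }  // the first attempt consumes its own view

ByteBuffer attempt2 = original.duplicate();          // a retry gets a fresh, fully readable view
System.out.println(attempt2.remaining());            // 7 -- the data is still all there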

@vmaheshw (Author) commented Oct 9, 2019

@anuchandy @rickle-msft Are there unit test cases to verify that the RetryPolicy is working correctly? I'm not able to figure out a way to enforce the RetryPolicy other than waiting for an error from the storage account.

@rickle-msft (Contributor)

@vmaheshw There are unit tests for the RetryPolicy here.

I'm not sure I understood the second part of your question, though. What are you trying to achieve? And what do you mean by "enforce RetryPolicy"?

If you are still seeing this error after switching to preview3, we'd love to get more information on that.

@vmaheshw (Author) commented Oct 10, 2019

@rickle-msft I want to be sure that the RetryPolicy is working as described. I could not find a way to simulate a failure that will automatically trigger the default retry.
Is there any log/trace message that I can find in my run to tell whether a retry was attempted? In v10.5 there was a trace message giving the request number, and I could look for "==> OUTGOING REQUEST (Try number='1')". This is not present in v12, so I'm not confident whether this issue (#5180) is fixed or not.
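One way to observe retry behavior without waiting for the storage service to fail is to simulate a transient error with plain Reactor. This does not exercise the SDK's RequestRetryPolicy; it only shows that a defer-based body can safely be consumed again on a retry, and all names in the sketch are made up:

import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

AtomicInteger attempts = new AtomicInteger();

// Replayable body: defer builds a fresh ByteBuf for every subscription (every attempt).
Flux<ByteBuf> body = Flux.defer(() -> Flux.just(Unpooled.wrappedBuffer("payload".getBytes())));

// Fake upload: consumes (and releases) the body, then fails on the first attempt only.
Mono<String> fakeUpload = Mono.defer(() ->
        body.doOnNext(ByteBuf::release)
            .then(attempts.incrementAndGet() == 1
                    ? Mono.<String>error(new IOException("simulated network error"))
                    : Mono.just("uploaded")));

System.out.println(fakeUpload.retry(3).block());   // prints "uploaded" on the second attempt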

@rickle-msft (Contributor)

@vmaheshw I see. You should be able to run the tests I pointed you to in order to verify the correct behavior. As for logging the try number, I don't think there is any information on that in the v12 logs right now, but I have included this request in the issue that discusses logging improvements for preview 5 (#4328).

@rickle-msft (Contributor)

I am going to close this issue, as I believe it was addressed with the switch to accepting ByteBuffers, and we have not heard anything to the contrary from the customer. Please feel free to reopen or post again if you continue to hit this or similar issues.


@github-actions github-actions bot locked and limited conversation to collaborators Apr 12, 2023