[BUG] Table Storage does not allow empty string PartitionKey or RowKey values #25020

Closed
3 tasks done
VincentBaal opened this issue Oct 25, 2021 · 1 comment · Fixed by #31903
Labels
  • Client: This issue points to a problem in the data-plane of the library.
  • customer-reported: Issues that are reported by GitHub users external to the Azure organization.
  • question: The issue doesn't require a change to the product in order to be resolved. Most issues start as that.
  • Tables

Comments


VincentBaal commented Oct 25, 2021

The Azure Table Storage SDK for Java does not allow empty strings for PartitionKey or RowKey. This article states that empty strings should be permitted: https://docs.microsoft.com/en-us/rest/api/storageservices/Designing-a-Scalable-Partitioning-Strategy-for-Azure-Table-Storage
They were also permitted in the legacy Storage SDK.
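For context, here is a minimal sketch (not taken from the issue) of the kind of usage the linked documentation implies should be valid. The connection string environment variable and table name are placeholders, and the exact point at which 12.1.3 throws may differ slightly from this sketch:

```java
import com.azure.data.tables.TableClient;
import com.azure.data.tables.TableClientBuilder;
import com.azure.data.tables.models.TableEntity;

public class EmptyKeySketch {
    public static void main(String[] args) {
        // Placeholder connection string and table name.
        TableClient tableClient = new TableClientBuilder()
            .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
            .tableName("SampleTable")
            .buildClient();

        // Empty-string keys are documented as valid by the Table service,
        // but azure-data-tables 12.1.3 rejects them client-side with an
        // IllegalArgumentException ("'PartitionKey' must be a non-empty String.").
        TableEntity entity = new TableEntity("", "");
        tableClient.createEntity(entity);
    }
}
```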

Stacktrace
```
java.lang.IllegalArgumentException: 'RowKey' must be a non-empty String.
    at com.azure.data.tables.models.TableEntity.validateProperty(TableEntity.java:153)
    at com.azure.data.tables.models.TableEntity.setProperties(TableEntity.java:130)
    at com.azure.data.tables.implementation.ModelHelper.createEntity(ModelHelper.java:120)
    at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
    at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
    at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
    at com.azure.data.tables.TableAsyncClient.lambda$listEntities$26(TableAsyncClient.java:789)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:125)
    at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
    at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:73)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
    at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoCallable.subscribe(MonoCallable.java:61)
    at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
    at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:73)
    at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2397)
    at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2193)
    at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2067)
    at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
    at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
    at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
    at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2397)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
    at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
    at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
    at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
    at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
    at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
    at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
    at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
    at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
    at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
    at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
    at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
    at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:173)
    at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
    at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2663)
    at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
    at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
    at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
    at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
    at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)
    at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:145)
    at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:212)
    at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:269)
    at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:401)
    at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:416)
    at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:470)
    at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:685)
    at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:94)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1368)
    at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1245)
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1282)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at io.netty.channel.kqueue.AbstractKQueueStreamChannel$KQueueStreamUnsafe.readReady(AbstractKQueueStreamChannel.java:544)
    at io.netty.channel.kqueue.AbstractKQueueChannel$AbstractKQueueUnsafe.readReady(AbstractKQueueChannel.java:382)
    at io.netty.channel.kqueue.KQueueEventLoop.processReady(KQueueEventLoop.java:211)
    at io.netty.channel.kqueue.KQueueEventLoop.run(KQueueEventLoop.java:289)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:829)
```

To Reproduce
To reproduce, add an entity with an empty RowKey to a table using Microsoft Azure Storage Explorer, then try to retrieve it using the Table Storage SDK.
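A minimal repro sketch matching the stack trace above; the connection string environment variable and table name are placeholders, not taken from the issue:

```java
import com.azure.data.tables.TableClient;
import com.azure.data.tables.TableClientBuilder;

public class ReproSketch {
    public static void main(String[] args) {
        // Assumes the table already contains an entity whose RowKey is the
        // empty string, e.g. inserted via Microsoft Azure Storage Explorer.
        TableClient tableClient = new TableClientBuilder()
            .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
            .tableName("ReproTable")
            .buildClient();

        // Deserializing the returned entity calls TableEntity.validateProperty and
        // throws: java.lang.IllegalArgumentException: 'RowKey' must be a non-empty String.
        tableClient.listEntities().forEach(entity ->
            System.out.println(entity.getPartitionKey() + " / " + entity.getRowKey()));
    }
}
```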

Code Snippet
The issue is caused by the following check in TableEntity.java at line 151:

```java
private void validateProperty(String key, Object value) {
    Objects.requireNonNull(key, "'key' cannot be null.");

    if ((TablesConstants.PARTITION_KEY.equals(key) || TablesConstants.ROW_KEY.equals(key))
        && value != null
        && (!(value instanceof String) || ((String) value).isEmpty())) {

        throw logger.logExceptionAsError(new IllegalArgumentException(String.format(
            "'%s' must be a non-empty String.", key)));
    }

    if (TablesConstants.TIMESTAMP_KEY.equals(key) && value != null && !(value instanceof OffsetDateTime)) {
        throw logger.logExceptionAsError(new IllegalArgumentException(String.format(
            "'%s' must be an OffsetDateTime.", key)));
    }

    if ((TablesConstants.ODATA_ETAG_KEY.equals(key) || TablesConstants.ODATA_EDIT_LINK_KEY.equals(key)
        || TablesConstants.ODATA_ID_KEY.equals(key) || TablesConstants.ODATA_TYPE_KEY.equals(key))
        && value != null && !(value instanceof String)) {

        throw logger.logExceptionAsError(new IllegalArgumentException(String.format(
            "'%s' must be a String.", key)));
    }
}
```
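For illustration only, one way this check could be relaxed so that empty (but still String-typed) PartitionKey and RowKey values pass client-side validation; this is a sketch, not necessarily the change made in #31903:

```java
// Sketch: validate the type of PartitionKey/RowKey but no longer reject empty strings.
// Not necessarily how #31903 implements the fix.
private void validateProperty(String key, Object value) {
    Objects.requireNonNull(key, "'key' cannot be null.");

    if ((TablesConstants.PARTITION_KEY.equals(key) || TablesConstants.ROW_KEY.equals(key))
        && value != null && !(value instanceof String)) {

        throw logger.logExceptionAsError(new IllegalArgumentException(String.format(
            "'%s' must be a String.", key)));
    }

    // The Timestamp and OData property checks would remain unchanged.
}
```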

Expected behavior
The expected behavior is that the TableEntity containing the empty RowKey is retrieved without errors.

Setup

  • Library/Libraries: com.azure:azure-data-tables:12.1.3
  • Java version: 11
  • App Server/Environment: Tomcat
  • Frameworks: Jersey

Additional context
A similar problem is mentioned in an issue from 2012: #62

Information Checklist
Kindly make sure that you have added all of the following information above and checked off the required fields; otherwise we will treat the issue as an incomplete report.

  • Bug Description Added
  • Repro Steps Added
  • Setup information Added
@ghost added the needs-triage (Workflow: This is a new issue that needs to be triaged to the appropriate team.), customer-reported and question labels on Oct 25, 2021
@joshfree changed the title from "[BUG] Table Storrage does not allow empty string PartitionKey or RowKey values" to "[BUG] Table Storage does not allow empty string PartitionKey or RowKey values" on Oct 25, 2021
@joshfree added the Client and Tables labels on Oct 25, 2021
@ghost removed the needs-triage label on Oct 25, 2021
@joshfree (Member) commented

@vcolin7 could you please take a look?

@jairmyree self-assigned this on Sep 2, 2022
@vcolin7 removed their assignment on Oct 12, 2022
jairmyree added a commit that referenced this issue Mar 15, 2023
…" as partition or row keys (#31903)

* Remove logic that disallowed empty string as primary key

* Allow for the creation of entities without primary key

* Adding Recordings for new Tests

* Updated CHANGELOG.md for PR

* Update CHANGELOG.md

* Removed unused import
@github-actions bot locked and limited conversation to collaborators on Jun 13, 2023