From 4aecdc2d7f76dc36bb43125d4af780a73ec8d0d1 Mon Sep 17 00:00:00 2001
From: Marek Dopiera
Date: Wed, 11 May 2022 16:43:04 +0200
Subject: [PATCH] [mirroring client] Catch-up handover review with missing
 features and review comments (#3347)

* chore: revert review comments
* feat: add MirroringOperationException exception markers (#125)
* feat: concurrent writes in MirroringBufferedMutator (#80)
* refactor: implement multiple argument operations on MirroringAsyncTable with specific operations rather than batch() (#75)
* feat: implement MirroringAsyncTable#getName() (#132)
* feat: use Logger rather than stdout in DefaultMismatchDetector (#128)
* feat: synchronous writes (#88)
* fix: implement heapSize method for RowCell (#111)
* feat: FlowController accounts for memory usage (#137)
* refactor: remove Configuration as a base of MirroringConfiguration (#127)
* feat: MirroringAsyncBufferedMutator (#81)
* refactor: rename WRITE_MISMATCH to SECONDARY_WRITE_ERROR (#138)
* fix: BufferedMutator close() waits for all secondary flushes to finish (#110)
* feat: 2.x reads sampling (#114)
* refactor: make MirroringResultScanner synchronize on itself rather than MirroringTable (#134)
* test: ConcurrentBufferedMutator integration tests (#135)
* feat: add synchronous MirroringConnection to 2.x (#109)
* fix: MirroringConnection in 2.x failed to compile (#139)
* fix: fix BufferedMutator ITs (#140)
* feat: run 1.x integration tests on MirroringConnection etc. from 2.x (#108)
* feat: 2.x - rewrite Increment and Append as Put in batch (#116)
* fix: fix build (#142)
* refactor: minor fixes after review (#117)
* feat: MirroringAsyncTable#getScanner() (#58)
* test: 2.x integration tests (#112)
* feat: implement MirroringAsyncBufferedMutatorBuilder (#144)
* feat: log rows and values in DefaultMismatchDetector (#129)
* fix: ITs - add expected parameter to MismatchDetectors (#153)
* fix: force Append and Increment to return results and discard that result before returning it to user (#136)
* fix: review fixes in utils
* fix: review fixes in BufferedMutator
* fix: review fixes in Faillog
* fix: fixed reference counting
* fix: review fixes in FlowController
* fix: review fixes in metrics
* fix: review fixes in verification
* fix: review fixes in MirroringTable
* fix: review fixes in HBase 2.x client
* fix: fixes in ITs
* feat: MirroringAsyncTable: scan(), scanAll() (#131)
* fix: review fixes in tests
* feat: MirroringConnection: timeout in close() and abort() (#133)
* feat: better mismatch detection of scan results (#130)
* feat: quickstart (#105)
* fix: 2.x scan ITs (#158)
* fix: DefaultMismatchDetector tests (#157)
* fix: ConcurrentBufferedMutator waits for both flushes to finish before closing (#161)
* fix: additional minor fixes after review (#163)
* fix: BufferedMutator review fixes (#164)
  - Simplify #flush().
  - Add javadocs.
  - (sequential) Fix flush() exception handling.
  - (sequential) Move error handling to a separate inner class.
* fix: PR fixes
* fix: report zeroed error metrics after successful operations
* fix: prepend MismatchDetectorCounter with Test to better reflect its purpose
* feat: client-side timestamping (#165)
* fix: reduce timeout in TestBlocking to make the tests run faster
* fix: asyncClose -> closePrimaryAndScheduleSecondaryClose
* fix: remove unused Batcher#throwBatchDataExceptionIfPresent
* fix: remove unused Comparators#compareRows
* fix: extract failedReads from MatchingSuccessfulReadsResults to reduce confusion
* feat: remove unused MirroringTracer from FailedMutationLogger
* fix: MirroringAsyncBufferedMutator - test if failed mutation is passed to secondary write error consumer
* fix: TestMirroringAsyncTableInputModification typo fix
* fix: describe user flush() in BufferedMutator in quickstart
* fix: MirroringBufferedMutator - move flush threshold from BufferedMutations to FlushSerializer
* refactor: MirroringBufferedMutator#close() - use AccumulatedExceptions instead of List
* feat: BufferedMutator - add close timeout
* feat: AsyncBufferedMutator - add close timeout
* fix: remove stale addSecondaryMutation comment
* fix: add a comment that addSecondaryMutation handles failed writes
* fix: unify implementations of flushBufferedMutatorBeforeClosing
* fix: BufferedMutator - throw exceptions on close
* fix: BufferedMutator - add comment explaining that chain of flush operations is created
* fix: BufferedMutator - clarify comments
* fix: Concurrent BufferedMutator - fix throwFlushExceptionIfAvailable
* fix: explain why flush is called in Sequential BufferedMutator test
* fix: TestConcurrentMirroringBufferedMutator - make waiting for calls explicit
* refactor: BufferedMutator - rename scheduleFlushAll() to scheduleFlush()
* refactor: make FlushSerializer non-static
* fix: BufferedMutator - use HierarchicalReferenceCounter
* feat: add MirroringConnection constructor taking MirroringConfiguration
* refactor: move releaseReservations to finally
* fix: use less convoluted example in lastFlushFutures description
* fix: merge small Timestamper files into a single file
* fix: add a comment explaining which exceptions are forwarded to the user and why in SequentialMirroringBufferedMutator
* fix: use UnsupportedOperationException instead of RuntimeException when forbidden mutation type is encountered
* fix: add comment explaining why batch is complicated
* fix: add a TODO to implement point writes without batch

Co-authored-by: Mateusz Walkiewicz
Co-authored-by: Adam Czajkowski
Co-authored-by: Kajetan Boroszko
---
 .../bigtable/hbase/adapters/read/RowCell.java |   19 +
 .../pom.xml                                   |   21 +
 .../hbase/mirroring/TestBlocking.java         |  268 +++-
 .../hbase/mirroring/TestBufferedMutator.java  |  192 ++-
 .../hbase/mirroring/TestErrorDetection.java   |  187 +--
 .../hbase/mirroring/TestMirroringTable.java   |  240 +--
 .../TestReadVerificationSampling.java         |   67 +-
 .../utils/BlockingFlowControllerStrategy.java |   67 +
 .../utils/BlockingMismatchDetector.java       |   57 +
 .../mirroring/utils/ConfigurationHelper.java  |   14 +-
 .../hbase/mirroring/utils/ConnectionRule.java |   30 +-
 .../mirroring/utils/DatabaseHelpers.java      |    4 +
 .../mirroring/utils/ExecutorServiceRule.java  |   39 -
 .../utils/HBaseMiniClusterSingleton.java      |    6 +-
 .../hbase/mirroring/utils/Helpers.java        |    2 +-
 .../utils/MismatchDetectorCounter.java        |   95 --
 .../utils/MismatchDetectorCounterRule.java    |    2 +-
 .../mirroring/utils/TestMismatchDetector.java |  173 +
 .../utils/TestMismatchDetectorCounter.java    |  140 ++
 .../utils/TestWriteErrorConsumer.java         |   18 +-
 .../mirroring/utils/compat/TableCreator.java  |   29 +
 .../utils/compat/TableCreator1x.java          |   37 +
 .../FailingHBaseHRegion.java                  |  134 +-
 .../bigtable-to-hbase-local-configuration.xml |   10 -
 .../hbase-to-bigtable-local-configuration.xml |   10 -
 .../hbase1_x/MirroringBufferedMutator.java    |  558 -------
 .../hbase1_x/MirroringConfiguration.java      |   80 +-
 .../hbase1_x/MirroringConnection.java         |  333 ++--
 .../hbase1_x/MirroringOperationException.java |  133 ++
 .../mirroring/hbase1_x/MirroringOptions.java  |  154 +-
 .../hbase1_x/MirroringResultScanner.java      |  209 ++-
 .../mirroring/hbase1_x/MirroringTable.java    | 1010 +++++-------
 .../AsyncResultScannerWrapper.java            |  125 +-
 .../asyncwrappers/AsyncTableWrapper.java      |   91 +-
 .../ConcurrentMirroringBufferedMutator.java   |  386 +++++
 .../MirroringBufferedMutator.java             |  620 +++++++
 .../SequentialMirroringBufferedMutator.java   |  528 ++++++
 .../hbase1_x/utils/AccumulatedExceptions.java |    4 +-
 .../hbase1_x/utils/BatchHelpers.java          |  383 ++++-
 .../mirroring/hbase1_x/utils/Batcher.java     |  476 ++++++
 ...ableThrowingIOAndInterruptedException.java |    8 +
 .../utils/CallableThrowingIOException.java    |    8 +
 .../mirroring/hbase1_x/utils/Comparators.java |   39 +-
 .../DefaultSecondaryWriteErrorConsumer.java   |   22 +-
 .../utils/MirroringConfigurationHelper.java   |  208 ++-
 .../hbase1_x/utils/OperationUtils.java        |   47 +
 .../hbase1_x/utils/RequestScheduling.java     |  159 +-
 .../utils/SecondaryWriteErrorConsumer.java    |   13 +
 ...econdaryWriteErrorConsumerWithMetrics.java |    4 +-
 .../utils/compat/CellComparatorCompat.java    |   22 +
 .../compat/CellComparatorCompatImpl.java}     |   23 +-
 .../hbase1_x/utils/faillog/Appender.java      |    6 +
 .../utils/faillog/DefaultAppender.java        |   63 +-
 .../utils/faillog/DefaultSerializer.java      |    7 +
 ...{Logger.java => FailedMutationLogger.java} |   12 +-
 .../hbase1_x/utils/faillog/LogBuffer.java     |    7 +-
 .../hbase1_x/utils/faillog/README.md          |    1 -
 .../hbase1_x/utils/faillog/Serializer.java    |    4 +
 .../flowcontrol/FlowControlStrategy.java      |    7 +
 .../utils/flowcontrol/FlowController.java     |   63 +-
 .../RequestCountingFlowControlStrategy.java   |   59 +-
 .../RequestResourcesDescription.java          |    3 +-
 .../SingleQueueFlowControlStrategy.java       |   60 +-
 .../utils/flowcontrol/WriteOperationInfo.java |   51 +
 .../MirroringMetricsRecorder.java             |   32 +-
 .../MirroringMetricsViews.java                |   53 +-
 .../MirroringSpanConstants.java               |   27 +-
 .../MirroringSpanFactory.java                 |   90 +-
 .../mirroringmetrics/MirroringTracer.java     |    7 +
 .../HierarchicalReferenceCounter.java         |   76 +
 .../ListenableReferenceCounter.java           |   31 +-
 .../ReferenceCounter.java}                    |   14 +-
 .../ReferenceCounterUtils.java                |   35 +
 .../reflection/ReflectionConstructor.java     |   56 -
 .../utils/timestamper/CopyingTimestamper.java |  136 ++
 .../utils/timestamper/InPlaceTimestamper.java |   77 +
 .../utils/timestamper/MonotonicTimer.java     |   43 +
 .../utils/timestamper/NoopTimestamper.java    |   47 +
 .../utils/timestamper/TimestampUtils.java     |   48 +
 .../utils/timestamper/Timestamper.java        |   56 +
 .../verification/DefaultMismatchDetector.java |  401 ++++-
 .../verification/MismatchDetector.java        |   28 +-
 .../VerificationContinuationFactory.java      |   42 +-
 .../hbase1_x/ExecutorServiceRule.java         |   41 +-
 .../mirroring/hbase1_x/TestConnection.java    |  120 ++
 .../mirroring/hbase1_x/TestHelpers.java       |  165 +-
 .../TestMirroringBufferedMutator.java         |  488 ------
 .../hbase1_x/TestMirroringConfiguration.java  |   28 +-
 .../hbase1_x/TestMirroringConnection.java     |  120 +-
 .../TestMirroringConnectionClosing.java       |  262 +++
 .../hbase1_x/TestMirroringMetrics.java        |  138 +-
 .../TestMirroringResultScanner.java           |  223 +--
 .../hbase1_x/TestMirroringTable.java          |  689 ++++--
 .../TestMirroringTableInputModification.java  |   53 +-
 .../TestMirroringTableSynchronousMode.java    |  344 ++++
 .../hbase1_x/TestVerificationSampling.java    |   43 +-
 .../TestAsyncResultScannerWrapper.java        |   38 +-
 .../asyncwrappers/TestAsyncTableWrapper.java  |   10 +-
 .../MirroringBufferedMutatorCommon.java       |  180 +++
 ...estConcurrentMirroringBufferedMutator.java |  411 +++++
 .../TestMirroringBufferedMutator.java         |   88 +
 ...estSequentialMirroringBufferedMutator.java |  388 +++++
 .../utils/TestDefaultMismatchDetector.java    |   69 +
 .../utils/faillog/DefaultAppenderTest.java    |   40 +-
 ...est.java => FailedMutationLoggerTest.java} |    7 +-
 .../hbase1_x/utils/faillog/LogBufferTest.java |   17 +-
 .../utils/flowcontrol/TestFlowController.java |  103 +-
 ...estRequestCountingFlowControlStrategy.java |   23 +-
 .../TestRequestResourcesDescription.java      |   39 +-
 .../TestListenableReferenceCounter.java       |    7 +-
 .../timestamper/TestCopyingTimestamper.java   |  190 +++
 .../timestamper/TestInPlaceTimestamper.java   |  190 +++
 .../pom.xml                                   |    8 +
 .../pom.xml                                   |  441 +++++
 .../hbase/MavenPlaceholderIntegration12x.java |   23 +
 .../hbase/mirroring/IntegrationTests.java     |   93 ++
 .../utils/compat/TableCreator2x.java          |   44 +
 .../regionserver/FailingHBaseHRegion2.java    |  154 ++
 .../bigtable-to-hbase-local-configuration.xml |   56 +
 .../hbase-to-bigtable-local-configuration.xml |   56 +
 .../src/test/resources/log4j.properties      |    7 +
 .../src/test/resources/prometheus.yml        |   13 +
 .../pom.xml                                   |  475 ++++++
 .../hbase/MavenPlaceholderIntegration2x.java  |   23 +
 .../hbase/mirroring/IntegrationTests.java     |   33 +
 .../hbase/mirroring/TestBlocking.java         |  174 ++
 .../hbase/mirroring/TestErrorDetection.java   |  307 ++++
 .../mirroring/TestMirroringAsyncTable.java    | 1421 +++++++++++++++++
 .../mirroring/utils/AsyncConnectionRule.java  |   42 +
 .../bigtable-to-hbase-local-configuration.xml |   56 +
 .../hbase-to-bigtable-local-configuration.xml |   56 +
 .../src/test/resources/log4j.properties      |    7 +
 .../src/test/resources/prometheus.yml        |   13 +
 .../MirroringAsyncBufferedMutator.java        |  179 +++
 .../hbase2_x/MirroringAsyncConfiguration.java |  111 +-
 .../hbase2_x/MirroringAsyncConnection.java    |  325 +++-
 .../hbase2_x/MirroringAsyncTable.java         |  482 ++++--
 .../hbase2_x/MirroringConnection.java         |   99 ++
 .../mirroring/hbase2_x/MirroringTable.java    |   86 +
 .../compat/CellComparatorCompatImpl.java      |   29 +
 .../utils/futures/FutureConverter.java        |    1 -
 .../TestMirroringAsyncBufferedMutator.java    |  212 +++
 .../TestMirroringAsyncConfiguration.java      |  116 +-
 .../hbase2_x/TestMirroringAsyncTable.java     |  590 ++++++-
 ...tMirroringAsyncTableInputModification.java |  141 +-
 .../hbase2_x/TestVerificationSampling.java    |  270 ++++
 .../utils/TestAsyncRequestScheduling.java     |   93 +-
 .../hadoop/hbase/client/TestRegistry.java     |    7 +-
 .../pom.xml                                   |   14 +
 quickstart.md                                 |  179 +++
 150 files changed, 15829 insertions(+), 4267 deletions(-)
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/BlockingFlowControllerStrategy.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/BlockingMismatchDetector.java
 delete mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ExecutorServiceRule.java
 delete mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/MismatchDetectorCounter.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestMismatchDetectorCounter.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator1x.java
 delete mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringOperationException.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/ConcurrentMirroringBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/MirroringBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/SequentialMirroringBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/Batcher.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/compat/CellComparatorCompat.java
 rename bigtable-hbase-mirroring-client-1.x-parent/{bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/SlowMismatchDetector.java => bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/compat/CellComparatorCompatImpl.java} (52%)
 rename bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/{Logger.java => FailedMutationLogger.java} (88%)
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/WriteOperationInfo.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/HierarchicalReferenceCounter.java
 rename bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/{ => referencecounting}/ListenableReferenceCounter.java (65%)
 rename bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/{ListenableCloseable.java => referencecounting/ReferenceCounter.java} (54%)
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ReferenceCounterUtils.java
 delete mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/reflection/ReflectionConstructor.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/CopyingTimestamper.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/InPlaceTimestamper.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/MonotonicTimer.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/NoopTimestamper.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TimestampUtils.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/Timestamper.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestConnection.java
 delete mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConnectionClosing.java
 rename bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/{asyncwrappers => }/TestMirroringResultScanner.java (59%)
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTableSynchronousMode.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/MirroringBufferedMutatorCommon.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestConcurrentMirroringBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestMirroringBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestSequentialMirroringBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/TestDefaultMismatchDetector.java
 rename bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/{LoggerTest.java => FailedMutationLoggerTest.java} (88%)
 rename bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/{ => referencecounting}/TestListenableReferenceCounter.java (89%)
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TestCopyingTimestamper.java
 create mode 100644 bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TestInPlaceTimestamper.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/pom.xml
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/main/java/com/google/cloud/bigtable/hbase/MavenPlaceholderIntegration12x.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/IntegrationTests.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator2x.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/org/apache/hadoop/hbase/regionserver/FailingHBaseHRegion2.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/log4j.properties
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/prometheus.yml
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/pom.xml
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/main/java/com/google/cloud/bigtable/hbase/MavenPlaceholderIntegration2x.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/IntegrationTests.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBlocking.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestErrorDetection.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestMirroringAsyncTable.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/AsyncConnectionRule.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/log4j.properties
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/prometheus.yml
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringConnection.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringTable.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/compat/CellComparatorCompatImpl.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncBufferedMutator.java
 create mode 100644 bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestVerificationSampling.java
 create mode 100644 quickstart.md

diff --git a/bigtable-client-core-parent/bigtable-hbase/src/main/java/com/google/cloud/bigtable/hbase/adapters/read/RowCell.java b/bigtable-client-core-parent/bigtable-hbase/src/main/java/com/google/cloud/bigtable/hbase/adapters/read/RowCell.java
index 568d408180..691d5db6c7 100644
--- a/bigtable-client-core-parent/bigtable-hbase/src/main/java/com/google/cloud/bigtable/hbase/adapters/read/RowCell.java
+++ b/bigtable-client-core-parent/bigtable-hbase/src/main/java/com/google/cloud/bigtable/hbase/adapters/read/RowCell.java
@@ -24,6 +24,7 @@
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
 
 /**
  * RowCell is an alternative implementation of {@link KeyValue}. Unlike KeyValue, RowCell stores
@@ -277,4 +278,22 @@ public String toString() {
         + "/"
         + Type.codeToType(getTypeByte());
   }
+
+  public long heapSize() {
+    long labelSize = ClassSize.ARRAYLIST;
+    for (String label : labels) {
+      labelSize += ClassSize.STRING + ClassSize.align(label.length());
+    }
+    return ClassSize.align(rowArray.length)
+        + ClassSize.ARRAY
+        + ClassSize.align(familyArray.length)
+        + ClassSize.ARRAY
+        + ClassSize.align(qualifierArray.length)
+        + ClassSize.ARRAY
+        + 8 // timestamp
+        + ClassSize.align(valueArray.length)
+        + ClassSize.ARRAY
+        + labelSize
+        + ClassSize.OBJECT;
+  }
 }
diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/pom.xml b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/pom.xml
index c615e2bb02..bc6464a3f7 100644
--- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/pom.xml
+++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/pom.xml
@@ -77,6 +77,7 @@ limitations under the License.
     hbase-to-bigtable-local-configuration.xml
+    com.google.cloud.bigtable.hbase.mirroring.utils.compat.TableCreator1x
@@ -143,6 +144,7 @@ limitations under the License.
bigtable-to-hbase-local-configuration.xml + com.google.cloud.bigtable.hbase.mirroring.utils.compat.TableCreator1x @@ -317,6 +319,12 @@ limitations under the License. 1.1.2 test + + org.mockito + mockito-core + 3.8.0 + test + org.apache.logging.log4j log4j-api @@ -329,6 +337,19 @@ limitations under the License. 2.14.1 test + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-1.x + 2.0.0-alpha2-SNAPSHOT + test-jar + test + + + org.apache.hbase + * + + + diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBlocking.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBlocking.java index 90b50fbee1..33f2a02ea7 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBlocking.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBlocking.java @@ -15,25 +15,33 @@ */ package com.google.cloud.bigtable.hbase.mirroring; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_MISMATCH_DETECTOR_CLASS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_CONNECTION_CONNECTION_TERMINATION_TIMEOUT; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_FACTORY_CLASS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_MISMATCH_DETECTOR_FACTORY_CLASS; 
import static com.google.common.truth.Truth.assertThat; +import static org.junit.Assert.fail; +import com.google.cloud.bigtable.hbase.mirroring.utils.BlockingFlowControllerStrategy; +import com.google.cloud.bigtable.hbase.mirroring.utils.BlockingMismatchDetector; import com.google.cloud.bigtable.hbase.mirroring.utils.ConfigurationHelper; import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; import com.google.cloud.bigtable.hbase.mirroring.utils.DatabaseHelpers; -import com.google.cloud.bigtable.hbase.mirroring.utils.ExecutorServiceRule; import com.google.cloud.bigtable.hbase.mirroring.utils.Helpers; -import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounter; import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounterRule; -import com.google.cloud.bigtable.hbase.mirroring.utils.SlowMismatchDetector; -import com.google.cloud.bigtable.hbase.mirroring.utils.ZipkinTracingRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; +import com.google.common.base.Stopwatch; +import com.google.common.util.concurrent.SettableFuture; import java.io.IOException; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Table; +import org.junit.Before; import org.junit.ClassRule; import org.junit.Rule; import org.junit.Test; @@ -42,110 +50,198 @@ @RunWith(JUnit4.class) public class TestBlocking { - static final byte[] columnFamily1 = "cf1".getBytes(); - static final byte[] qualifier1 = "q1".getBytes(); @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); - @ClassRule public static 
ZipkinTracingRule zipkinTracingRule = new ZipkinTracingRule(); - @Rule public ExecutorServiceRule executorServiceRule = new ExecutorServiceRule(); - public DatabaseHelpers databaseHelpers = new DatabaseHelpers(connectionRule, executorServiceRule); + @Rule public ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + private DatabaseHelpers databaseHelpers = + new DatabaseHelpers(connectionRule, executorServiceRule); @Rule public MismatchDetectorCounterRule mismatchDetectorCounterRule = new MismatchDetectorCounterRule(); - @Test - public void testConnectionCloseBlocksUntilAllRequestsHaveBeenVerified() - throws IOException, InterruptedException { - long beforeTableClose; - long afterTableClose; - long afterConnectionClose; + private static final byte[] columnFamily1 = "cf1".getBytes(); + private static final byte[] qualifier1 = "q1".getBytes(); + + private TableName tableName; + @Before + public void setUp() throws IOException { + this.tableName = connectionRule.createTable(columnFamily1); + } + + @Test(timeout = 10000) + public void testConnectionCloseBlocksUntilAllRequestsHaveBeenVerified() + throws IOException, InterruptedException, TimeoutException, ExecutionException { Configuration config = ConfigurationHelper.newConfiguration(); - config.set(MIRRORING_MISMATCH_DETECTOR_CLASS, SlowMismatchDetector.class.getCanonicalName()); - SlowMismatchDetector.sleepTime = 1000; + config.set( + MIRRORING_MISMATCH_DETECTOR_FACTORY_CLASS, + BlockingMismatchDetector.Factory.class.getName()); + BlockingMismatchDetector.reset(); TableName tableName; - try (MirroringConnection connection = databaseHelpers.createConnection(config)) { - tableName = connectionRule.createTable(connection, columnFamily1); - try (Table t = connection.getTable(tableName)) { - for (int i = 0; i < 10; i++) { - Get get = new Get("1".getBytes()); - get.addColumn(columnFamily1, qualifier1); - t.get(get); - } - beforeTableClose = System.currentTimeMillis(); + final 
MirroringConnection connection = databaseHelpers.createConnection(config); + tableName = connectionRule.createTable(connection, columnFamily1); + try (Table t = connection.getTable(tableName)) { + for (int i = 0; i < 10; i++) { + Get get = new Get("1".getBytes()); + get.addColumn(columnFamily1, qualifier1); + t.get(get); } - afterTableClose = System.currentTimeMillis(); + } // There are in-flight requests but closing a Table object shouldn't block. + + final SettableFuture<Void> closingThreadStarted = SettableFuture.create(); + final SettableFuture<Void> closingThreadEnded = SettableFuture.create(); + + Thread closingThread = + new Thread() { + @Override + public void run() { + try { + closingThreadStarted.set(null); + connection.close(); + closingThreadEnded.set(null); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + }; + closingThread.start(); + + // Wait until the closing thread starts. + closingThreadStarted.get(1, TimeUnit.SECONDS); + + // And give it some time to run, to verify that it has blocked. It will block until the timeout + // is encountered or all async operations are finished. We will hit the second case here, because + // we will unblock the mismatch detector. + try { + closingThreadEnded.get(1, TimeUnit.SECONDS); + fail("should throw"); + } catch (TimeoutException ignored) { + // expected } - afterConnectionClose = System.currentTimeMillis(); - long tableCloseDuration = afterTableClose - beforeTableClose; - long connectionCloseDuration = afterConnectionClose - afterTableClose; - assertThat(tableCloseDuration).isLessThan(100); - assertThat(connectionCloseDuration).isGreaterThan(900); - assertThat(MismatchDetectorCounter.getInstance().getVerificationsStartedCounter()) - .isEqualTo(10); - assertThat(MismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()) + + // Finish running verifications. + BlockingMismatchDetector.unblock(); + + // And now Connection#close() should unblock.
+ closingThreadEnded.get(1, TimeUnit.SECONDS); + + // And all verifications should have finished. + assertThat(TestMismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()) .isEqualTo(10); } - @Test - public void testSlowSecondaryConnection() throws IOException { + @Test(timeout = 10000) + public void flowControllerBlocksScheduling() + throws IOException, InterruptedException, ExecutionException, TimeoutException { Configuration config = ConfigurationHelper.newConfiguration(); - config.set(MIRRORING_MISMATCH_DETECTOR_CLASS, SlowMismatchDetector.class.getCanonicalName()); - SlowMismatchDetector.sleepTime = 100; - config.set(MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS, "10"); - TableName tableName; - byte[] row = "1".getBytes(); - try (MirroringConnection connection = databaseHelpers.createConnection(config)) { - tableName = connectionRule.createTable(connection, columnFamily1); - try (Table table = connection.getTable(tableName)) { - table.put(Helpers.createPut(row, columnFamily1, qualifier1, "1".getBytes())); - } - } + config.set( + MIRRORING_FLOW_CONTROLLER_STRATEGY_FACTORY_CLASS, + BlockingFlowControllerStrategy.Factory.class.getName()); + BlockingFlowControllerStrategy.reset(); - long startTime; - long endTime; - long duration; + final byte[] row = "1".getBytes(); + final SettableFuture<Void> closingThreadStarted = SettableFuture.create(); + final SettableFuture<Void> closingThreadEnded = SettableFuture.create(); try (MirroringConnection connection = databaseHelpers.createConnection(config)) { - startTime = System.currentTimeMillis(); try (Table table = connection.getTable(tableName)) { - for (int i = 0; i < 1000; i++) { - table.get(Helpers.createGet(row, columnFamily1, qualifier1)); - } - } - } - endTime = System.currentTimeMillis(); - duration = endTime - startTime; - // 1000 requests * 100 ms / 10 concurrent requests - assertThat(duration).isGreaterThan(10000); + Thread t = + new Thread() { + @Override + public void run() { +
closingThreadStarted.set(null); + try { + table.put(Helpers.createPut(row, columnFamily1, qualifier1, "1".getBytes())); + closingThreadEnded.set(null); + } catch (IOException e) { + closingThreadEnded.setException(e); + throw new RuntimeException(e); + } + } + }; + t.start(); - config.set(MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS, "50"); - try (MirroringConnection connection = databaseHelpers.createConnection(config)) { - startTime = System.currentTimeMillis(); - try (Table table = connection.getTable(tableName)) { - for (int i = 0; i < 1000; i++) { - table.get(Helpers.createGet(row, columnFamily1, qualifier1)); - } - } - } - endTime = System.currentTimeMillis(); - duration = endTime - startTime; - // 1000 requests * 100 ms / 50 concurrent requests - assertThat(duration).isGreaterThan(2000); + // Wait until the thread starts. + closingThreadStarted.get(1, TimeUnit.SECONDS); - config.set(MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS, "1000"); - try (MirroringConnection connection = databaseHelpers.createConnection(config)) { - startTime = System.currentTimeMillis(); - try (Table table = connection.getTable(tableName)) { - for (int i = 0; i < 1000; i++) { - table.get(Helpers.createGet(row, columnFamily1, qualifier1)); + // Give it some time to run, to verify that it has blocked. We are expecting that this + // operation will time out because it is waiting for the FlowController to admit resources + // for the `put` operation. + try { + closingThreadEnded.get(1, TimeUnit.SECONDS); + fail("should throw"); + } catch (TimeoutException ignored) { + // expected } + // Unlock flow controller. + BlockingFlowControllerStrategy.unblock(); + // And verify that it has unblocked.
+ closingThreadEnded.get(1, TimeUnit.SECONDS); } } - endTime = System.currentTimeMillis(); - duration = endTime - startTime; - // 1000 requests * 100 ms / 1000 concurrent requests - assertThat(duration).isLessThan(1000); + } + + @Test(timeout = 10000) + public void testMirroringConnectionCloseTimeout() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + long timeoutMillis = 1000; + + final Configuration config = ConfigurationHelper.newConfiguration(); + config.set(MIRRORING_CONNECTION_CONNECTION_TERMINATION_TIMEOUT, String.valueOf(timeoutMillis)); + config.set( + MIRRORING_MISMATCH_DETECTOR_FACTORY_CLASS, + BlockingMismatchDetector.Factory.class.getName()); + BlockingMismatchDetector.reset(); + + final byte[] row = "1".getBytes(); + + final TableName tableName = connectionRule.createTable(columnFamily1); + final SettableFuture closingThreadStartedFuture = SettableFuture.create(); + final SettableFuture closingThreadFinishedFuture = SettableFuture.create(); + + Thread t = + new Thread() { + @Override + public void run() { + try { + // Not in try-with-resources, we are calling close() explicitly. + MirroringConnection connection = databaseHelpers.createConnection(config); + Table table = connection.getTable(tableName); + table.get(Helpers.createGet(row, columnFamily1, qualifier1)); + table.close(); + + closingThreadStartedFuture.set(connection); + Stopwatch stopwatch = Stopwatch.createStarted(); + connection.close(); + stopwatch.stop(); + closingThreadFinishedFuture.set(stopwatch.elapsed(TimeUnit.MILLISECONDS)); + } catch (IOException e) { + closingThreadFinishedFuture.setException(e); + } + } + }; + + t.start(); + + // Wait until the thread starts. + MirroringConnection c = closingThreadStartedFuture.get(1, TimeUnit.SECONDS); + // And wait for it to finish. It should time-out after 1 second. 
+ long closeDuration = closingThreadFinishedFuture.get(3, TimeUnit.SECONDS); + // The closingThreadFinishedFuture did not time out, thus we know that closing the connection + // lasted no longer than 3 seconds. + // We also need to check that it waited at least `timeoutMillis`. + // `closeDuration` is strictly greater than timeout because it includes some overhead, + // but `timeoutMillis` >> expected overhead, thus false positives are unlikely. + assertThat(closeDuration).isAtLeast(timeoutMillis); + assertThat(c.getPrimaryConnection().isClosed()).isTrue(); + assertThat(c.getSecondaryConnection().isClosed()).isFalse(); + + // Finish asynchronous operation. + BlockingMismatchDetector.unblock(); + // Give it a second to run. + Thread.sleep(1000); + assertThat(c.getPrimaryConnection().isClosed()).isTrue(); + assertThat(c.getSecondaryConnection().isClosed()).isTrue(); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBufferedMutator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBufferedMutator.java index 5620221e71..e9810fc0aa 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBufferedMutator.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBufferedMutator.java @@ -15,30 +15,41 @@ */ package com.google.cloud.bigtable.hbase.mirroring; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_CONCURRENT_WRITES;
+import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_MAX_OUTSTANDING_REQUESTS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SYNCHRONOUS_WRITES; import static com.google.common.truth.Truth.assertThat; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.fail; import com.google.cloud.bigtable.hbase.mirroring.utils.ConfigurationHelper; import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; import com.google.cloud.bigtable.hbase.mirroring.utils.DatabaseHelpers; -import com.google.cloud.bigtable.hbase.mirroring.utils.ExecutorServiceRule; -import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounter; import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounterRule; import com.google.cloud.bigtable.hbase.mirroring.utils.PropagatingThread; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter.Mismatch; import com.google.cloud.bigtable.hbase.mirroring.utils.TestWriteErrorConsumer; import com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegion; import com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegionRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.common.primitives.Ints; import com.google.common.primitives.Longs; import java.io.IOException; import java.nio.ByteBuffer; import 
java.util.ArrayList; +import java.util.HashSet; import java.util.List; +import java.util.Set; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.HConstants.OperationStatusCode; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.BufferedMutator; import org.apache.hadoop.hbase.client.BufferedMutator.ExceptionListener; @@ -56,22 +67,45 @@ import org.junit.Rule; import org.junit.Test; import org.junit.runner.RunWith; -import org.junit.runners.JUnit4; +import org.junit.runners.Parameterized; -@RunWith(JUnit4.class) +@RunWith(Parameterized.class) public class TestBufferedMutator { @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); - - @Rule public ExecutorServiceRule executorServiceRule = new ExecutorServiceRule(); - @Rule public FailingHBaseHRegionRule failingHBaseHRegionRule = new FailingHBaseHRegionRule(); - public DatabaseHelpers databaseHelpers = new DatabaseHelpers(connectionRule, executorServiceRule); + @Rule public ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + private DatabaseHelpers databaseHelpers = + new DatabaseHelpers(connectionRule, executorServiceRule); @Rule public MismatchDetectorCounterRule mismatchDetectorCounterRule = new MismatchDetectorCounterRule(); - static final byte[] columnFamily1 = "cf1".getBytes(); - static final byte[] columnQualifier1 = "cq1".getBytes(); + @Rule public FailingHBaseHRegionRule failingHBaseHRegionRule = new FailingHBaseHRegionRule(); + + private static final byte[] columnFamily1 = "cf1".getBytes(); + private static final byte[] qualifier1 = "cq1".getBytes(); + + @Parameterized.Parameters(name = "mutateConcurrently: {0}") + public static Object[] data() { + return new Object[] {false, true}; + } + + private final boolean mutateConcurrently; + + public TestBufferedMutator(boolean mutateConcurrently) { + this.mutateConcurrently = 
mutateConcurrently; + } + + private Configuration createConfiguration() { + Configuration configuration = ConfigurationHelper.newConfiguration(); + // MirroringOptions constructor verifies that MIRRORING_SYNCHRONOUS_WRITES is true when + // MIRRORING_CONCURRENT_WRITES is true (consult its constructor for more details). + // We are keeping MIRRORING_SYNCHRONOUS_WRITES false if we do not write concurrently (we are not + // testing the other case here anyway) and set it to true to meet the requirements otherwise. + configuration.set(MIRRORING_CONCURRENT_WRITES, String.valueOf(this.mutateConcurrently)); + configuration.set(MIRRORING_SYNCHRONOUS_WRITES, String.valueOf(this.mutateConcurrently)); + return configuration; + } @Test public void testBufferedMutatorPerformsMutations() throws IOException, InterruptedException { @@ -79,6 +113,10 @@ public void testBufferedMutatorPerformsMutations() throws IOException, Interrupt final int numMutationsInBatch = 100; final int numBatchesPerThread = 1000; + // We will run `numThreads` threads, each performing `numBatchesPerThread` batches of mutations, + // `numMutationsInBatch` each, using our MirroringClient. After all the operations are over we + // will verify that primary and secondary databases have the same contents. + class WorkerThread extends PropagatingThread { final BufferedMutator bufferedMutator; final int threadId; @@ -109,8 +147,9 @@ public void performTask() throws Throwable { } } - Configuration config = ConfigurationHelper.newConfiguration(); - config.set(MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS, "10000"); + Configuration config = this.createConfiguration(); + // Set the flow controller's outstanding-requests limit to a high value to increase concurrency.
+ config.set(MIRRORING_FLOW_CONTROLLER_STRATEGY_MAX_OUTSTANDING_REQUESTS, "10000"); TableName tableName; try (MirroringConnection connection = databaseHelpers.createConnection(config)) { @@ -126,7 +165,7 @@ public void performTask() throws Throwable { thread.propagatingJoin(); } } - } // wait for secondary writes + } // connection close will wait for secondary writes long readEntries = 0; try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -146,8 +185,8 @@ public void performTask() throws Throwable { assertEquals(numBatchesPerThread * numMutationsInBatch, readEntries); assertEquals( numBatchesPerThread * numMutationsInBatch / 100 + 1, - MismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); - assertEquals(0, MismatchDetectorCounter.getInstance().getErrorCount()); + TestMismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); + assertEquals(0, TestMismatchDetectorCounter.getInstance().getErrorCount()); } private void verifyRowContents( @@ -162,20 +201,23 @@ private void verifyRowContents( } @Test - public void testBufferedMutatorReportsFailedSecondaryWrites() throws IOException { + public void testBufferedMutatorSecondaryErrorHandling() throws IOException { Assume.assumeTrue( ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); - FailingHBaseHRegion.failMutation(Longs.toByteArray(3), "row-3-error"); - FailingHBaseHRegion.failMutation(Longs.toByteArray(7), "row-7-error"); + FailingHBaseHRegion.failMutation( + Longs.toByteArray(3), OperationStatusCode.SANITY_CHECK_FAILURE, "row-3-error"); + FailingHBaseHRegion.failMutation( + Longs.toByteArray(7), OperationStatusCode.SANITY_CHECK_FAILURE, "row-7-error"); TestWriteErrorConsumer.clearErrors(); - Configuration configuration = ConfigurationHelper.newConfiguration(); + Configuration configuration = this.createConfiguration(); configuration.set( - "google.bigtable.mirroring.write-error-consumer.impl", - 
TestWriteErrorConsumer.class.getCanonicalName()); + "google.bigtable.mirroring.write-error-consumer.factory-impl", + TestWriteErrorConsumer.Factory.class.getName()); TableName tableName; + List flushExceptions = null; try (MirroringConnection connection = databaseHelpers.createConnection(configuration)) { tableName = connectionRule.createTable(connection, columnFamily1); BufferedMutatorParams params = new BufferedMutatorParams(tableName); @@ -183,15 +225,44 @@ public void testBufferedMutatorReportsFailedSecondaryWrites() throws IOException for (int intRowId = 0; intRowId < 10; intRowId++) { byte[] rowId = Longs.toByteArray(intRowId); Put put = new Put(rowId, System.currentTimeMillis()); - put.addColumn(columnFamily1, columnQualifier1, Longs.toByteArray(intRowId)); + put.addColumn(columnFamily1, qualifier1, Longs.toByteArray(intRowId)); bm.mutate(put); } - bm.flush(); + try { + bm.flush(); + } catch (IOException e) { + if (e instanceof RetriesExhaustedWithDetailsException) { + flushExceptions = ((RetriesExhaustedWithDetailsException) e).getCauses(); + } else { + fail(); + } + } + } + } // connection close will wait for secondary writes + + if (this.mutateConcurrently) { + // ConcurrentBufferedMutator does not report secondary write errors. + assertThat(TestWriteErrorConsumer.getErrorCount()).isEqualTo(0); + + assertNotNull(flushExceptions); + assertEquals(2, flushExceptions.size()); + + for (Throwable e : flushExceptions) { + Throwable cause = e.getCause(); + if (cause instanceof MirroringOperationException) { + assertEquals( + MirroringOperationException.DatabaseIdentifier.Secondary, + ((MirroringOperationException) cause).databaseIdentifier); + } else { + fail(); + } } - } // wait for secondary writes + } else { + // SequentialBufferedMutator should have reported two errors. 
+ assertEquals(2, TestWriteErrorConsumer.getErrorCount()); - // Two failed writes should have been reported - assertThat(TestWriteErrorConsumer.getErrorCount()).isEqualTo(2); + assertNull(flushExceptions); + } try (MirroringConnection mirroringConnection = databaseHelpers.createConnection()) { try (Table table = mirroringConnection.getTable(tableName)) { @@ -205,21 +276,40 @@ public void testBufferedMutatorReportsFailedSecondaryWrites() throws IOException } } - // First mismatch happens when primary returns 3 and secondary returns 4 - assertThat(MismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(7); + List<Mismatch> mismatches = TestMismatchDetectorCounter.getInstance().getMismatches(); + assertThat(mismatches.size()).isEqualTo(7); + assertThat(mismatches).contains(scannerMismatch(3, 4)); + assertThat(mismatches).contains(scannerMismatch(4, 5)); + assertThat(mismatches).contains(scannerMismatch(5, 6)); + assertThat(mismatches).contains(scannerMismatch(6, 8)); + assertThat(mismatches).contains(scannerMismatch(7, 9)); + assertThat(mismatches).contains(scannerMismatch(8, null)); + assertThat(mismatches).contains(scannerMismatch(9, null)); + } + + private Mismatch scannerMismatch(int primary, Integer secondary) { + return new Mismatch( + HBaseOperation.NEXT_MULTIPLE, + Longs.toByteArray(primary), + secondary == null ?
null : Longs.toByteArray(secondary)); } @Test - public void testBufferedMutatorSkipsFailedPrimaryWrites() throws IOException { + public void testBufferedMutatorPrimaryErrorHandling() throws IOException { Assume.assumeTrue( ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); - FailingHBaseHRegion.failMutation(Longs.toByteArray(3), "row-3-error"); - FailingHBaseHRegion.failMutation(Longs.toByteArray(7), "row-7-error"); + FailingHBaseHRegion.failMutation( + Longs.toByteArray(3), OperationStatusCode.SANITY_CHECK_FAILURE, "row-3-error"); + FailingHBaseHRegion.failMutation( + Longs.toByteArray(7), OperationStatusCode.SANITY_CHECK_FAILURE, "row-7-error"); + + Configuration configuration = this.createConfiguration(); - final List thrownException = new ArrayList<>(); + final Set exceptionsThrown = new HashSet<>(); + final List exceptionRows = new ArrayList<>(); TableName tableName; - try (MirroringConnection connection = databaseHelpers.createConnection()) { + try (MirroringConnection connection = databaseHelpers.createConnection(configuration)) { tableName = connectionRule.createTable(connection, columnFamily1); BufferedMutatorParams params = new BufferedMutatorParams(tableName); params.listener( @@ -229,7 +319,8 @@ public void onException( RetriesExhaustedWithDetailsException e, BufferedMutator bufferedMutator) throws RetriesExhaustedWithDetailsException { for (int i = 0; i < e.getNumExceptions(); i++) { - thrownException.add(ByteBuffer.wrap(e.getRow(i).getRow())); + exceptionRows.add(ByteBuffer.wrap(e.getRow(i).getRow())); + exceptionsThrown.addAll(e.getCauses()); } } }); @@ -237,25 +328,42 @@ public void onException( for (int intRowId = 0; intRowId < 10; intRowId++) { byte[] rowId = Longs.toByteArray(intRowId); Put put = new Put(rowId, System.currentTimeMillis()); - put.addColumn(columnFamily1, columnQualifier1, Longs.toByteArray(intRowId)); + put.addColumn(columnFamily1, qualifier1, Longs.toByteArray(intRowId)); bm.mutate(put); } 
bm.flush(); } - } // wait for secondary writes + } // connection close will wait for secondary writes + List secondaryRows = new ArrayList<>(); try (MirroringConnection mirroringConnection = databaseHelpers.createConnection()) { Connection secondary = mirroringConnection.getSecondaryConnection(); Table table = secondary.getTable(tableName); ResultScanner scanner = table.getScanner(columnFamily1); - List rows = new ArrayList<>(); for (Result result : scanner) { - rows.add(Longs.fromByteArray(result.getRow())); + secondaryRows.add(Longs.fromByteArray(result.getRow())); } - assertThat(rows).containsExactly(0L, 1L, 2L, 4L, 5L, 6L, 8L, 9L); } - assertThat(thrownException).contains(ByteBuffer.wrap(Longs.toByteArray(3))); - assertThat(thrownException).contains(ByteBuffer.wrap(Longs.toByteArray(7))); + assertEquals(2, exceptionRows.size()); + assertThat(exceptionRows).contains(ByteBuffer.wrap(Longs.toByteArray(3))); + assertThat(exceptionRows).contains(ByteBuffer.wrap(Longs.toByteArray(7))); + + if (this.mutateConcurrently) { + assertEquals(2, exceptionsThrown.size()); + for (Throwable e : exceptionsThrown) { + Throwable cause = e.getCause(); + if (cause instanceof MirroringOperationException) { + assertEquals( + MirroringOperationException.DatabaseIdentifier.Primary, + ((MirroringOperationException) cause).databaseIdentifier); + } else { + fail(); + } + } + assertThat(secondaryRows).containsExactly(0L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L); + } else { + assertThat(secondaryRows).containsExactly(0L, 1L, 2L, 4L, 5L, 6L, 8L, 9L); + } } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestErrorDetection.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestErrorDetection.java index 0dd30890a2..fd78223e11 100644 --- 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestErrorDetection.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestErrorDetection.java @@ -15,34 +15,32 @@ */ package com.google.cloud.bigtable.hbase.mirroring; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_READ_VERIFICATION_RATE_PERCENT; +import static com.google.common.truth.Truth.assertThat; import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import com.google.cloud.bigtable.hbase.mirroring.utils.ConfigurationHelper; import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; import com.google.cloud.bigtable.hbase.mirroring.utils.DatabaseHelpers; -import com.google.cloud.bigtable.hbase.mirroring.utils.ExecutorServiceRule; import com.google.cloud.bigtable.hbase.mirroring.utils.Helpers; -import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounter; import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounterRule; -import com.google.cloud.bigtable.hbase.mirroring.utils.PrometheusStatsCollectionRule; import com.google.cloud.bigtable.hbase.mirroring.utils.PropagatingThread; -import com.google.cloud.bigtable.hbase.mirroring.utils.ZipkinTracingRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; import com.google.common.primitives.Longs; import java.io.IOException; -import java.nio.charset.Charset; import java.util.ArrayList; import java.util.List; import 
java.util.concurrent.TimeoutException; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Table; -import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.junit.ClassRule; import org.junit.Rule; import org.junit.Test; @@ -51,22 +49,19 @@ @RunWith(JUnit4.class) public class TestErrorDetection { - static final byte[] columnFamily1 = "cf1".getBytes(); - static final byte[] qualifier1 = "q1".getBytes(); @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); - @ClassRule public static ZipkinTracingRule zipkinTracingRule = new ZipkinTracingRule(); - - @ClassRule - public static PrometheusStatsCollectionRule prometheusStatsCollectionRule = - new PrometheusStatsCollectionRule(); - - @Rule public ExecutorServiceRule executorServiceRule = new ExecutorServiceRule(); + @Rule public ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + private DatabaseHelpers databaseHelpers = + new DatabaseHelpers(connectionRule, executorServiceRule); @Rule public MismatchDetectorCounterRule mismatchDetectorCounterRule = new MismatchDetectorCounterRule(); - public DatabaseHelpers databaseHelpers = new DatabaseHelpers(connectionRule, executorServiceRule); + private static final byte[] columnFamily1 = "cf1".getBytes(); + private static final byte[] qualifier1 = "q1".getBytes(); + private static final byte[] row1 = "r1".getBytes(); + private static final byte[] value1 = "v1".getBytes(); @Test public void readsAndWritesArePerformed() throws IOException { @@ -75,16 +70,16 @@ public void readsAndWritesArePerformed() throws IOException { try (MirroringConnection connection = databaseHelpers.createConnection()) { tableName = connectionRule.createTable(connection, 
columnFamily1); try (Table t1 = connection.getTable(tableName)) { - t1.put(Helpers.createPut("1".getBytes(), columnFamily1, qualifier1, "1".getBytes())); + t1.put(Helpers.createPut(row1, columnFamily1, qualifier1, value1)); } } try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table t2 = connection.getTable(tableName)) { - Result result = t2.get(Helpers.createGet("1".getBytes(), columnFamily1, qualifier1)); - assertArrayEquals(result.getRow(), "1".getBytes()); - assertArrayEquals(result.getValue(columnFamily1, qualifier1), "1".getBytes()); - assertEquals(MismatchDetectorCounter.getInstance().getErrorCount(), 0); + Result result = t2.get(Helpers.createGet(row1, columnFamily1, qualifier1)); + assertArrayEquals(result.getRow(), row1); + assertArrayEquals(result.getValue(columnFamily1, qualifier1), value1); + assertEquals(TestMismatchDetectorCounter.getInstance().getErrorCount(), 0); } } } @@ -95,47 +90,49 @@ public void mismatchIsDetected() throws IOException, InterruptedException { try (MirroringConnection connection = databaseHelpers.createConnection()) { tableName = connectionRule.createTable(connection, columnFamily1); try (Table mirroredTable = connection.getTable(tableName)) { - mirroredTable.put( - Helpers.createPut("1".getBytes(), columnFamily1, qualifier1, "1".getBytes())); + mirroredTable.put(Helpers.createPut(row1, columnFamily1, qualifier1, value1)); } } try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table secondaryTable = connection.getSecondaryConnection().getTable(tableName)) { - secondaryTable.put( - Helpers.createPut("1".getBytes(), columnFamily1, qualifier1, "2".getBytes())); + secondaryTable.put(Helpers.createPut(row1, columnFamily1, qualifier1, "2".getBytes())); } } try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table mirroredTable = connection.getTable(tableName)) { - Result result = - mirroredTable.get(Helpers.createGet("1".getBytes(), columnFamily1,
qualifier1)); + Result result = mirroredTable.get(Helpers.createGet(row1, columnFamily1, qualifier1)); // Data from primary is returned. - assertArrayEquals(result.getRow(), "1".getBytes()); - assertArrayEquals(result.getValue(columnFamily1, qualifier1), "1".getBytes()); + assertArrayEquals(result.getRow(), row1); + assertArrayEquals(result.getValue(columnFamily1, qualifier1), value1); } } - assertEquals(1, MismatchDetectorCounter.getInstance().getErrorCount()); + assertEquals(1, TestMismatchDetectorCounter.getInstance().getErrorCount()); } @Test public void concurrentInsertionAndReadingInsertsWithScanner() - throws IOException, InterruptedException, TimeoutException { + throws IOException, TimeoutException { class WorkerThread extends PropagatingThread { private final long workerId; - private final long batchSize = 100; + private final long batchSize; private final Connection connection; private final TableName tableName; private final long entriesPerWorker; private final long numberOfBatches; public WorkerThread( - int workerId, Connection connection, TableName tableName, long numberOfBatches) { + int workerId, + Connection connection, + TableName tableName, + long numberOfBatches, + long batchSize) { this.workerId = workerId; this.connection = connection; + this.batchSize = batchSize; this.entriesPerWorker = numberOfBatches * batchSize; this.numberOfBatches = numberOfBatches; this.tableName = tableName; @@ -149,12 +146,12 @@ public void performTask() throws Throwable { for (long batchEntryId = 0; batchEntryId < this.batchSize; batchEntryId++) { long putIndex = this.workerId * this.entriesPerWorker + batchId * this.batchSize + batchEntryId; - long putValue = putIndex + 1; + long putTimestamp = putIndex + 1; byte[] putIndexBytes = Longs.toByteArray(putIndex); - byte[] putValueBytes = Longs.toByteArray(putValue); + byte[] putValueBytes = ("value-" + putIndex).getBytes(); puts.add( Helpers.createPut( - putIndexBytes, columnFamily1, qualifier1, putValue, 
putValueBytes)); + putIndexBytes, columnFamily1, qualifier1, putTimestamp, putValueBytes)); } table.put(puts); } @@ -164,6 +161,7 @@ public void performTask() throws Throwable { final int numberOfWorkers = 100; final int numberOfBatches = 100; + final long batchSize = 100; TableName tableName; try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -171,7 +169,8 @@ public void performTask() throws Throwable { List workers = new ArrayList<>(); for (int i = 0; i < numberOfWorkers; i++) { - PropagatingThread worker = new WorkerThread(i, connection, tableName, numberOfBatches); + PropagatingThread worker = + new WorkerThread(i, connection, tableName, numberOfBatches, batchSize); worker.start(); workers.add(worker); } @@ -181,115 +180,31 @@ public void performTask() throws Throwable { } } - try (MirroringConnection connection = databaseHelpers.createConnection()) { + Configuration configuration = ConfigurationHelper.newConfiguration(); + configuration.set(MIRRORING_READ_VERIFICATION_RATE_PERCENT, "100"); + + try (MirroringConnection connection = databaseHelpers.createConnection(configuration)) { try (Table t = connection.getTable(tableName)) { try (ResultScanner s = t.getScanner(columnFamily1, qualifier1)) { long counter = 0; for (Result r : s) { long row = Longs.fromByteArray(r.getRow()); - long value = Longs.fromByteArray(r.getValue(columnFamily1, qualifier1)); - assertEquals(counter, row); - assertEquals(counter + 1, value); + byte[] value = r.getValue(columnFamily1, qualifier1); + assertThat(row).isEqualTo(counter); + assertThat(value).isEqualTo(("value-" + counter).getBytes()); counter += 1; } } } } + assertEquals(0, TestMismatchDetectorCounter.getInstance().getErrorCount()); + // + 1 because we also verify the final `null` denoting end of results.
assertEquals( - MismatchDetectorCounter.getInstance().getErrorsAsString(), - 0, - MismatchDetectorCounter.getInstance().getErrorCount()); - } - - @Test - public void conditionalMutationsPreserveConsistency() - throws IOException, InterruptedException, TimeoutException { - final int numberOfOperations = 50; - final int numberOfWorkers = 100; - - final byte[] canary = "canary-value".getBytes(); - - class WorkerThread extends PropagatingThread { - private final long workerId; - private final Connection connection; - private final TableName tableName; - - public WorkerThread(int workerId, Connection connection, TableName tableName) { - this.workerId = workerId; - this.connection = connection; - this.tableName = tableName; - } - - @Override - public void performTask() throws Throwable { - try (Table table = this.connection.getTable(tableName)) { - byte[] row = String.format("r%s", workerId).getBytes(); - table.put(Helpers.createPut(row, columnFamily1, qualifier1, 0, "0".getBytes())); - for (int i = 0; i < numberOfOperations; i++) { - byte[] currentValue = String.valueOf(i).getBytes(); - byte[] nextValue = String.valueOf(i + 1).getBytes(); - assertFalse( - table.checkAndPut( - row, - columnFamily1, - qualifier1, - CompareOp.NOT_EQUAL, - currentValue, - Helpers.createPut(row, columnFamily1, qualifier1, i, canary))); - assertTrue( - table.checkAndPut( - row, - columnFamily1, - qualifier1, - CompareOp.EQUAL, - currentValue, - Helpers.createPut(row, columnFamily1, qualifier1, i, nextValue))); - } - } - } - } - - TableName tableName; - try (MirroringConnection connection = databaseHelpers.createConnection()) { - tableName = connectionRule.createTable(connection, columnFamily1); - List workers = new ArrayList<>(); - for (int i = 0; i < numberOfWorkers; i++) { - PropagatingThread worker = new WorkerThread(i, connection, tableName); - worker.start(); - workers.add(worker); - } - - for (PropagatingThread worker : workers) { - worker.propagatingJoin(30000); - } - } - - try 
(MirroringConnection connection = databaseHelpers.createConnection()) { - try (Table t = connection.getTable(tableName)) { - try (ResultScanner s = t.getScanner(columnFamily1, qualifier1)) { - int counter = 0; - for (Result r : s) { - assertEquals( - new String(r.getRow(), Charset.defaultCharset()), - String.valueOf(numberOfOperations), - new String(r.getValue(columnFamily1, qualifier1), Charset.defaultCharset())); - counter++; - } - assertEquals(numberOfWorkers, counter); - } - } - } - - assertEquals( - numberOfWorkers + 1, // because null returned from the scanner is also verified. - MismatchDetectorCounter.getInstance().getVerificationsStartedCounter()); - assertEquals( - numberOfWorkers + 1, - MismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); + numberOfWorkers * numberOfBatches * batchSize + 1, + TestMismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); assertEquals( - MismatchDetectorCounter.getInstance().getErrorsAsString(), - 0, - MismatchDetectorCounter.getInstance().getErrorCount()); + numberOfWorkers * numberOfBatches * batchSize + 1, + TestMismatchDetectorCounter.getInstance().getVerificationsStartedCounter()); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestMirroringTable.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestMirroringTable.java index 86a448470f..b610ba4896 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestMirroringTable.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestMirroringTable.java @@ -22,15 +22,15 @@ import 
com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; import com.google.cloud.bigtable.hbase.mirroring.utils.DatabaseHelpers; import com.google.cloud.bigtable.hbase.mirroring.utils.DatabaseHelpers.DatabaseSelector; -import com.google.cloud.bigtable.hbase.mirroring.utils.ExecutorServiceRule; import com.google.cloud.bigtable.hbase.mirroring.utils.Helpers; -import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounter; import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounterRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter; import com.google.cloud.bigtable.hbase.mirroring.utils.TestWriteErrorConsumer; import com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegion; import com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegionRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.DefaultAppender; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOptions; import com.google.common.base.Predicate; import com.google.common.primitives.Longs; import java.io.File; @@ -56,26 +56,16 @@ import org.junit.Test; public class TestMirroringTable { - @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); - - @Rule public ExecutorServiceRule executorServiceRule = new ExecutorServiceRule(); - - @Rule public FailingHBaseHRegionRule failingHBaseHRegionRule = new FailingHBaseHRegionRule(); + @Rule public ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + private DatabaseHelpers databaseHelpers = + new DatabaseHelpers(connectionRule, executorServiceRule); @Rule public MismatchDetectorCounterRule mismatchDetectorCounterRule = new MismatchDetectorCounterRule(); - final Predicate failPredicate = - new 
Predicate() { - @Override - public boolean apply(@NullableDecl byte[] bytes) { - return bytes.length == 8 && Longs.fromByteArray(bytes) % 2 == 0; - } - }; - - public DatabaseHelpers databaseHelpers = new DatabaseHelpers(connectionRule, executorServiceRule); + @Rule public FailingHBaseHRegionRule failingHBaseHRegionRule = new FailingHBaseHRegionRule(); static final byte[] columnFamily1 = "cf1".getBytes(); static final byte[] qualifier1 = "cq1".getBytes(); @@ -84,6 +74,14 @@ public boolean apply(@NullableDecl byte[] bytes) { static final byte[] qualifier4 = "cq4".getBytes(); static final byte[] qualifier5 = "cq5".getBytes(); + final Predicate failEvenRowKeysPredicate = + new Predicate() { + @Override + public boolean apply(@NullableDecl byte[] bytes) { + return bytes.length == 8 && Longs.fromByteArray(bytes) % 2 == 0; + } + }; + public static byte[] rowKeyFromId(int id) { return Longs.toByteArray(id); } @@ -92,19 +90,22 @@ public static byte[] rowKeyFromId(int id) { public void testPut() throws IOException { int databaseEntriesCount = 1000; - final TableName tableName1 = connectionRule.createTable(columnFamily1); + final TableName tableName = connectionRule.createTable(columnFamily1); try (MirroringConnection connection = databaseHelpers.createConnection()) { - try (Table t1 = connection.getTable(tableName1)) { + try (Table t1 = connection.getTable(tableName)) { for (int i = 0; i < databaseEntriesCount; i++) { t1.put(Helpers.createPut(i, columnFamily1, qualifier1)); } } } - databaseHelpers.verifyTableConsistency(tableName1); + databaseHelpers.verifyTableConsistency(tableName); + } - final TableName tableName2 = connectionRule.createTable(columnFamily1); + @Test + public void testPuts() throws IOException { + final TableName tableName = connectionRule.createTable(columnFamily1); try (MirroringConnection connection = databaseHelpers.createConnection()) { - try (Table t1 = connection.getTable(tableName2)) { + try (Table t1 = connection.getTable(tableName)) { int id = 0; 
for (int i = 0; i < 10; i++) { List puts = new ArrayList<>(); @@ -116,7 +117,7 @@ public void testPut() throws IOException { } } } - databaseHelpers.verifyTableConsistency(tableName2); + databaseHelpers.verifyTableConsistency(tableName); } @Test @@ -125,7 +126,8 @@ public void testPutWithPrimaryErrors() throws IOException { Assume.assumeTrue( ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation( + failEvenRowKeysPredicate, OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); final TableName tableName1 = connectionRule.createTable(columnFamily1); try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -133,7 +135,7 @@ public void testPutWithPrimaryErrors() throws IOException { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); final int finalI = i; - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ -174,7 +176,8 @@ public void testPutWithSecondaryErrors() throws IOException { int databaseEntriesCount = 1000; - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation( + failEvenRowKeysPredicate, OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); final TableName tableName1 = connectionRule.createTable(columnFamily1); @@ -186,7 +189,7 @@ public void testPutWithSecondaryErrors() throws IOException { } } } - databaseHelpers.verifyTableConsistency(tableName1, failPredicate); + databaseHelpers.verifyTableConsistency(tableName1, failEvenRowKeysPredicate); reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); @@ -205,7 +208,7 @@ public void testPutWithSecondaryErrors() throws IOException { } } } - databaseHelpers.verifyTableConsistency(tableName2, failPredicate); + databaseHelpers.verifyTableConsistency(tableName2, failEvenRowKeysPredicate); 
reportedErrorsContext2.assertNewErrorsReported(databaseEntriesCount / 2); } @@ -259,13 +262,13 @@ public void testDeleteWithPrimaryErrors() throws IOException { databaseHelpers.fillTable(tableName2, databaseEntriesCount, columnFamily1, qualifier1); FailingHBaseHRegion.failMutation( - failPredicate, OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + failEvenRowKeysPredicate, OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table t1 = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ -290,6 +293,7 @@ public void run() throws IOException { } try { t1.delete(deletes); + fail("should have thrown"); } catch (RetriesExhaustedWithDetailsException e) { assertThat(e.getNumExceptions()).isEqualTo(50); assertThat(deletes.size()).isEqualTo(50); @@ -298,6 +302,7 @@ public void run() throws IOException { thrownRows.add(e.getRow(exceptionId).getRow()); } + // delete() removes successful operations from the input list; only the failed rows remain.
List notDeletedRows = new ArrayList<>(); for (Delete delete : deletes) { notDeletedRows.add(delete.getRow()); } @@ -324,7 +329,8 @@ public void testDeleteWithSecondaryErrors() throws IOException { final TableName tableName2 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName2, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, OperationStatusCode.BAD_FAMILY, "failed"); + FailingHBaseHRegion.failMutation( + failEvenRowKeysPredicate, OperationStatusCode.BAD_FAMILY, "failed"); ReportedErrorsContext reportedErrorsContext1 = new ReportedErrorsContext(); try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -373,41 +379,58 @@ public void testCheckAndPut() throws IOException { try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table table = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { + // Each checkAndPut compares against the cell in the row's qualifier1 column. + // These cells are set by fillTable(); in the i-th row the cell contains the value + // Longs.toByteArray(i). byte[] rowKey = rowKeyFromId(i); - table.checkAndPut( - rowKey, - columnFamily1, - qualifier1, - Longs.toByteArray(i), - Helpers.createPut(i, columnFamily1, qualifier2)); - table.checkAndPut( - rowKey, - columnFamily1, - qualifier1, - CompareOp.EQUAL, - Longs.toByteArray(i), - Helpers.createPut(i, columnFamily1, qualifier3)); - table.checkAndPut( - rowKey, - columnFamily1, - qualifier1, - CompareOp.GREATER, - Longs.toByteArray(i + 1), - Helpers.createPut(i, columnFamily1, qualifier4)); - table.checkAndPut( - rowKey, - columnFamily1, - qualifier1, - CompareOp.NOT_EQUAL, - Longs.toByteArray(i), - Helpers.createPut(i, columnFamily1, qualifier5)); + // Column qualifier2 should have a value.
+ assertThat( + table.checkAndPut( + rowKey, + columnFamily1, + qualifier1, + /* compare to */ Longs.toByteArray(i), + Helpers.createPut(i, columnFamily1, qualifier2))) + .isTrue(); + // Column qualifier3 should have a value. + assertThat( + table.checkAndPut( + rowKey, + columnFamily1, + qualifier1, + CompareOp.EQUAL, + /* compare to */ Longs.toByteArray(i), + Helpers.createPut(i, columnFamily1, qualifier3))) + .isTrue(); + // Column qualifier4 should have a value. + assertThat( + table.checkAndPut( + rowKey, + columnFamily1, + qualifier1, + CompareOp.GREATER, + /* compare to */ Longs.toByteArray(i + 1), + Helpers.createPut(i, columnFamily1, qualifier4))) + .isTrue(); + // Column qualifier5 should not have a value. + assertThat( + table.checkAndPut( + rowKey, + columnFamily1, + qualifier1, + CompareOp.NOT_EQUAL, + /* compare to */ Longs.toByteArray(i), + Helpers.createPut(i, columnFamily1, qualifier5))) + .isFalse(); } } } + // We only modify rows present in the database before the loop. assertThat(databaseHelpers.countRows(tableName1, DatabaseSelector.PRIMARY)) .isEqualTo(databaseEntriesCount); + // There was a put iff checkAndPut returned true.
assertThat(databaseHelpers.countCells(tableName1, DatabaseSelector.PRIMARY)) .isEqualTo(databaseEntriesCount * 4); @@ -424,14 +447,14 @@ public void testCheckAndPutPrimaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table table = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); final int finalI = i; - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ -451,6 +474,7 @@ public void run() throws IOException { } assertThat(databaseHelpers.countRows(tableName1, DatabaseSelector.PRIMARY)) .isEqualTo(databaseEntriesCount); + // failEvenRowKeysPredicate fails every second put. 
assertThat(databaseHelpers.countCells(tableName1, DatabaseSelector.PRIMARY)) .isEqualTo((int) (databaseEntriesCount * 1.5)); databaseHelpers.verifyTableConsistency(tableName1); @@ -466,7 +490,7 @@ public void testCheckAndPutSecondaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); ReportedErrorsContext reportedErrorsContext1 = new ReportedErrorsContext(); try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -498,6 +522,9 @@ public void testCheckAndPutSecondaryErrors() throws IOException { .isEqualTo(databaseEntriesCount / 2); assertThat(databaseHelpers.countCells(tableName1, DatabaseSelector.PRIMARY)) .isEqualTo(databaseEntriesCount * 2); + // failEvenRowKeysPredicate fails every second put. + assertThat(databaseHelpers.countCells(tableName1, DatabaseSelector.SECONDARY)) + .isEqualTo((int) (databaseEntriesCount * 1.5)); reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); } @@ -519,6 +546,9 @@ public void testCheckAndDelete() throws IOException { try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table table = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { + // Each checkAndDelete compares against the cell in the row's qualifier1 column. + // These cells are set by fillTable(); in the i-th row the cell contains the value + // Longs.toByteArray(i). byte[] rowKey = rowKeyFromId(i); assertThat( table.checkAndDelete( @@ -559,9 +589,11 @@ public void testCheckAndDelete() throws IOException { } } + // We only modify rows present in the database before the loop.
assertThat(databaseHelpers.countRows(tableName1, DatabaseSelector.PRIMARY)) .isEqualTo(databaseEntriesCount); + // There was a delete iff checkAndDelete returned true. assertThat(databaseHelpers.countCells(tableName1, DatabaseSelector.PRIMARY)) .isEqualTo(databaseEntriesCount * 2); @@ -578,13 +610,13 @@ public void testCheckAndDeletePrimaryErrors() throws IOException { databaseHelpers.fillTable( tableName1, databaseEntriesCount, columnFamily1, qualifier1, qualifier2); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table table = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ -622,7 +654,7 @@ public void testCheckAndDeleteSecondaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); ReportedErrorsContext reportedErrorsContext1 = new ReportedErrorsContext(); try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -646,12 +678,15 @@ public void testCheckAndDeleteSecondaryErrors() throws IOException { DatabaseSelector.PRIMARY, Helpers.createScan(columnFamily1, qualifier1))) .isEqualTo(0); + assertThat(databaseHelpers.countCells(tableName1, DatabaseSelector.PRIMARY)).isEqualTo(0); assertThat( databaseHelpers.countRows( tableName1, DatabaseSelector.SECONDARY, Helpers.createScan(columnFamily1, qualifier1))) .isEqualTo(databaseEntriesCount / 2); + assertThat(databaseHelpers.countCells(tableName1, DatabaseSelector.SECONDARY)) + 
.isEqualTo(databaseEntriesCount / 2); reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); } @@ -719,13 +754,13 @@ public void testCheckAndMutatePrimaryErrors() throws IOException { databaseHelpers.fillTable( tableName1, databaseEntriesCount, columnFamily1, qualifier1, qualifier2); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table table = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ -765,7 +800,7 @@ public void testCheckAndMutateSecondaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); ReportedErrorsContext reportedErrorsContext1 = new ReportedErrorsContext(); try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -838,13 +873,13 @@ public void testIncrementPrimaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table table = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ 
-871,7 +906,8 @@ public void testIncrementSecondaryErrors() throws IOException { TestWriteErrorConsumer.clearErrors(); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation( + failEvenRowKeysPredicate, OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); ReportedErrorsContext reportedErrorsContext1 = new ReportedErrorsContext(); try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -927,13 +963,13 @@ public void testAppendPrimaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table table = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ -961,7 +997,8 @@ public void testAppendSecondaryErrors() throws IOException { TestWriteErrorConsumer.clearErrors(); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation( + failEvenRowKeysPredicate, OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); ReportedErrorsContext reportedErrorsContext1 = new ReportedErrorsContext(); try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -986,11 +1023,12 @@ public void testGet() throws IOException { try (Table t1 = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { byte[] rowKey = rowKeyFromId(i); - t1.get(Helpers.createGet(rowKey, columnFamily1, qualifier1)); + Result r = t1.get(Helpers.createGet(rowKey, columnFamily1, qualifier1)); + assertThat(r.value()).isEqualTo(Longs.toByteArray(i)); } } } - 
assertThat(MismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); } @Test @@ -1002,13 +1040,13 @@ public void testGetWithPrimaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table t1 = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ -1019,7 +1057,7 @@ public void run() throws IOException { } } } - assertThat(MismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); } @Test @@ -1031,7 +1069,7 @@ public void testGetWithSecondaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table t1 = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { @@ -1040,11 +1078,11 @@ public void testGetWithSecondaryErrors() throws IOException { } } } - assertThat(MismatchDetectorCounter.getInstance().getErrorCount()) + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()) .isEqualTo(databaseEntriesCount / 2); - 
assertThat(MismatchDetectorCounter.getInstance().getErrorCount("failure")) + assertThat(TestMismatchDetectorCounter.getInstance().getFailureCount()) .isEqualTo(databaseEntriesCount / 2); - assertThat(MismatchDetectorCounter.getInstance().getErrorCount("mismatch")).isEqualTo(0); + assertThat(TestMismatchDetectorCounter.getInstance().getMismatchCount()).isEqualTo(0); } @Test @@ -1060,7 +1098,7 @@ public void testExists() throws IOException { } } } - assertThat(MismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); } @Test @@ -1072,13 +1110,13 @@ public void testExistsWithPrimaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = databaseHelpers.createConnection()) { try (Table t1 = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { final byte[] rowKey = rowKeyFromId(i); - catchIOExceptionsIfWillThrow( + validateThrownExceptionIO( rowKey, new RunnableThrowingIO() { @Override @@ -1089,7 +1127,7 @@ public void run() throws IOException { } } } - assertThat(MismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); } @Test @@ -1101,7 +1139,7 @@ public void testExistsWithSecondaryErrors() throws IOException { final TableName tableName1 = connectionRule.createTable(columnFamily1); databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); try (MirroringConnection connection = 
databaseHelpers.createConnection()) { try (Table t1 = connection.getTable(tableName1)) { for (int i = 0; i < databaseEntriesCount; i++) { @@ -1110,11 +1148,11 @@ public void testExistsWithSecondaryErrors() throws IOException { } } } - assertThat(MismatchDetectorCounter.getInstance().getErrorCount()) + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()) .isEqualTo(databaseEntriesCount / 2); - assertThat(MismatchDetectorCounter.getInstance().getErrorCount("failure")) + assertThat(TestMismatchDetectorCounter.getInstance().getFailureCount()) .isEqualTo(databaseEntriesCount / 2); - assertThat(MismatchDetectorCounter.getInstance().getErrorCount("mismatch")).isEqualTo(0); + assertThat(TestMismatchDetectorCounter.getInstance().getMismatchCount()).isEqualTo(0); } @Test @@ -1148,7 +1186,7 @@ public void testBatchWithPrimaryErrors() throws IOException, InterruptedExceptio Assume.assumeTrue( ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, "failed"); final TableName tableName = connectionRule.createTable(columnFamily1); try (MirroringConnection connection = databaseHelpers.createConnection()) { @@ -1165,6 +1203,10 @@ public void testBatchWithPrimaryErrors() throws IOException, InterruptedExceptio t1.batch(batch, result); } catch (RetriesExhaustedWithDetailsException e) { assertThat(e.getNumExceptions()).isEqualTo(50); + for (int i = 0; i < e.getNumExceptions(); i++) { + byte[] r = e.getRow(i).getRow(); + assertThat(failEvenRowKeysPredicate.apply(r)).isTrue(); + } int correctResults = 0; for (Object o : result) { if (o instanceof Result) { @@ -1186,7 +1228,7 @@ public void testBatchWithSecondaryErrors() throws IOException, InterruptedExcept int databaseEntriesCount = 1000; - FailingHBaseHRegion.failMutation(failPredicate, "failed"); + FailingHBaseHRegion.failMutation(failEvenRowKeysPredicate, 
"failed"); ReportedErrorsContext reportedErrorsContext1 = new ReportedErrorsContext(); final TableName tableName2 = connectionRule.createTable(columnFamily1); @@ -1207,16 +1249,21 @@ public void testBatchWithSecondaryErrors() throws IOException, InterruptedExcept } } } - databaseHelpers.verifyTableConsistency(tableName2, failPredicate); + databaseHelpers.verifyTableConsistency(tableName2, failEvenRowKeysPredicate); reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); } interface RunnableThrowingIO { + void run() throws IOException; } - private void catchIOExceptionsIfWillThrow(byte[] rowKey, RunnableThrowingIO runnable) { - boolean willThrow = failPredicate.apply(rowKey); + /** + * If rowKey row should fail (according to {@code failEvenRowKeysPredicate}) then expect the + * {@code runnable} to throw IOException, otherwise expect it not to throw anything.:w + */ + private void validateThrownExceptionIO(byte[] rowKey, RunnableThrowingIO runnable) { + boolean willThrow = failEvenRowKeysPredicate.apply(rowKey); try { runnable.run(); if (willThrow) { @@ -1231,7 +1278,8 @@ private void catchIOExceptionsIfWillThrow(byte[] rowKey, RunnableThrowingIO runn private static int getSecondaryWriteErrorLogMessagesWritten() throws IOException { Configuration configuration = ConfigurationHelper.newConfiguration(); - String prefixPath = configuration.get(DefaultAppender.PREFIX_PATH_KEY); + MirroringOptions mirroringOptions = new MirroringOptions(configuration); + String prefixPath = mirroringOptions.faillog.prefixPath; String[] prefixParts = prefixPath.split("/"); final String fileNamePrefix = prefixParts[prefixParts.length - 1]; String[] directoryParts = Arrays.copyOf(prefixParts, prefixParts.length - 1); diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestReadVerificationSampling.java 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestReadVerificationSampling.java index e6f0df4bd6..6de7b83c3b 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestReadVerificationSampling.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestReadVerificationSampling.java @@ -21,11 +21,11 @@ import com.google.cloud.bigtable.hbase.mirroring.utils.ConfigurationHelper; import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; import com.google.cloud.bigtable.hbase.mirroring.utils.DatabaseHelpers; -import com.google.cloud.bigtable.hbase.mirroring.utils.ExecutorServiceRule; import com.google.cloud.bigtable.hbase.mirroring.utils.Helpers; -import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounter; import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounterRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter; import com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegionRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; import com.google.common.collect.ImmutableList; import com.google.common.primitives.Longs; @@ -41,26 +41,26 @@ @RunWith(JUnit4.class) public class TestReadVerificationSampling { - @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); - - @Rule public ExecutorServiceRule executorServiceRule = new ExecutorServiceRule(); - @Rule public FailingHBaseHRegionRule failingHBaseHRegionRule = new FailingHBaseHRegionRule(); - public DatabaseHelpers databaseHelpers = new 
DatabaseHelpers(connectionRule, executorServiceRule); + @Rule public ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + private DatabaseHelpers databaseHelpers = + new DatabaseHelpers(connectionRule, executorServiceRule); @Rule public MismatchDetectorCounterRule mismatchDetectorCounterRule = new MismatchDetectorCounterRule(); - static final byte[] family1 = "cf1".getBytes(); - static final byte[] qualifier1 = "cq1".getBytes(); + @Rule public FailingHBaseHRegionRule failingHBaseHRegionRule = new FailingHBaseHRegionRule(); + + private static final byte[] columnFamily1 = "cf1".getBytes(); + private static final byte[] qualifier1 = "cq1".getBytes(); @Test public void testPartialReadsVerificationOnGets() throws IOException { TableName tableName; try (MirroringConnection connection = databaseHelpers.createConnection()) { - tableName = connectionRule.createTable(connection, family1); - databaseHelpers.fillTable(tableName, 1000, family1, qualifier1); + tableName = connectionRule.createTable(connection, columnFamily1); + databaseHelpers.fillTable(tableName, 1000, columnFamily1, qualifier1); } Configuration configuration = ConfigurationHelper.newConfiguration(); @@ -70,32 +70,37 @@ public void testPartialReadsVerificationOnGets() throws IOException { try (Table table = connection.getTable(tableName)) { for (int i = 0; i < 500; i++) { int index = (i % 100) * 10; - table.get(Helpers.createGet(Longs.toByteArray(index), family1, qualifier1)); + table.get(Helpers.createGet(Longs.toByteArray(index), columnFamily1, qualifier1)); table.get( ImmutableList.of( - Helpers.createGet(Longs.toByteArray(index + 1), family1, qualifier1), - Helpers.createGet(Longs.toByteArray(index + 2), family1, qualifier1), - Helpers.createGet(Longs.toByteArray(index + 3), family1, qualifier1), - Helpers.createGet(Longs.toByteArray(index + 4), family1, qualifier1), - Helpers.createGet(Longs.toByteArray(index + 5), family1, qualifier1), - 
Helpers.createGet(Longs.toByteArray(index + 6), family1, qualifier1), - Helpers.createGet(Longs.toByteArray(index + 7), family1, qualifier1), - Helpers.createGet(Longs.toByteArray(index + 8), family1, qualifier1), - Helpers.createGet(Longs.toByteArray(index + 9), family1, qualifier1))); + Helpers.createGet(Longs.toByteArray(index + 1), columnFamily1, qualifier1), + Helpers.createGet(Longs.toByteArray(index + 2), columnFamily1, qualifier1), + Helpers.createGet(Longs.toByteArray(index + 3), columnFamily1, qualifier1), + Helpers.createGet(Longs.toByteArray(index + 4), columnFamily1, qualifier1), + Helpers.createGet(Longs.toByteArray(index + 5), columnFamily1, qualifier1), + Helpers.createGet(Longs.toByteArray(index + 6), columnFamily1, qualifier1), + Helpers.createGet(Longs.toByteArray(index + 7), columnFamily1, qualifier1), + Helpers.createGet(Longs.toByteArray(index + 8), columnFamily1, qualifier1), + Helpers.createGet(Longs.toByteArray(index + 9), columnFamily1, qualifier1))); } } } - assertThat(MismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()) - .isLessThan(20); + // ReadSampler decides whether to verify read per whole request (e.g. it verifies all Gets in a + // batch or none). We sent 1000 requests. None of them should fail or be a mismatch. + // Our ReadSampler is probabilistic. We have a 0.01 chance of verifying a request. + // Assuming that our random number generator really is random and there are no unexpected + // errors, probability that this counter is at least 25 is about 0.000042. 
+ assertThat(TestMismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()) + .isLessThan(25); } @Test public void testAllReadsVerificationOnGets() throws IOException { TableName tableName; try (MirroringConnection connection = databaseHelpers.createConnection()) { - tableName = connectionRule.createTable(connection, family1); - databaseHelpers.fillTable(tableName, 10, family1, qualifier1); + tableName = connectionRule.createTable(connection, columnFamily1); + databaseHelpers.fillTable(tableName, 10, columnFamily1, qualifier1); } Configuration configuration = ConfigurationHelper.newConfiguration(); @@ -104,20 +109,20 @@ public void testAllReadsVerificationOnGets() throws IOException { try (MirroringConnection connection = databaseHelpers.createConnection(configuration)) { try (Table table = connection.getTable(tableName)) { for (int i = 0; i < 10; i++) { - table.get(Helpers.createGet(Longs.toByteArray(i), family1, qualifier1)); + table.get(Helpers.createGet(Longs.toByteArray(i), columnFamily1, qualifier1)); } } } - assertEquals(10, MismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); + assertEquals(10, TestMismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); } @Test public void testNoReadsVerificationOnGets() throws IOException { TableName tableName; try (MirroringConnection connection = databaseHelpers.createConnection()) { - tableName = connectionRule.createTable(connection, family1); - databaseHelpers.fillTable(tableName, 10, family1, qualifier1); + tableName = connectionRule.createTable(connection, columnFamily1); + databaseHelpers.fillTable(tableName, 10, columnFamily1, qualifier1); } Configuration configuration = ConfigurationHelper.newConfiguration(); @@ -126,11 +131,11 @@ public void testNoReadsVerificationOnGets() throws IOException { try (MirroringConnection connection = databaseHelpers.createConnection(configuration)) { try (Table table = connection.getTable(tableName)) { for (int i = 0; i < 10; 
i++) { - table.get(Helpers.createGet(Longs.toByteArray(i), family1, qualifier1)); + table.get(Helpers.createGet(Longs.toByteArray(i), columnFamily1, qualifier1)); } } } - assertEquals(0, MismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); + assertEquals(0, TestMismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/BlockingFlowControllerStrategy.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/BlockingFlowControllerStrategy.java new file mode 100644 index 0000000000..8a3ce5502c --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/BlockingFlowControllerStrategy.java @@ -0,0 +1,67 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
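The sampling comment added in TestReadVerificationSampling above (1000 requests, a 0.01 per-request verification probability, failure threshold at 25) can be sanity-checked with a binomial tail computation. The helper below is purely illustrative — `TailProbability` and `binomialTail` are hypothetical names, not part of this project — and assumes the ReadSampler samples each request independently:

```java
public class TailProbability {
    // P(X >= k) for X ~ Binomial(n, p), summed with log-factorials for
    // numerical stability instead of multiplying huge binomial coefficients.
    public static double binomialTail(int n, double p, int k) {
        double[] logFact = new double[n + 1];
        for (int i = 2; i <= n; i++) {
            logFact[i] = logFact[i - 1] + Math.log(i);
        }
        double tail = 0.0;
        for (int x = k; x <= n; x++) {
            double logP = logFact[n] - logFact[x] - logFact[n - x]
                + x * Math.log(p) + (n - x) * Math.log(1 - p);
            tail += Math.exp(logP);
        }
        return tail;
    }

    public static void main(String[] args) {
        // Roughly 4e-5, consistent with the ~0.000042 cited in the comment.
        System.out.println(binomialTail(1000, 0.01, 25));
    }
}
```

So the `isLessThan(25)` assertion is expected to flake only about once in every ~24000 runs under these assumptions.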
+ */ +package com.google.cloud.bigtable.hbase.mirroring.utils; + +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOptions; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowControlStrategy; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.common.util.concurrent.ListenableFuture; +import com.google.common.util.concurrent.MoreExecutors; +import com.google.common.util.concurrent.SettableFuture; + +public class BlockingFlowControllerStrategy implements FlowControlStrategy { + + private static SettableFuture unblock; + + public static void reset() { + unblock = SettableFuture.create(); + } + + public static void unblock() { + unblock.set(null); + } + + @Override + public ListenableFuture asyncRequestResourceReservation( + RequestResourcesDescription resourcesDescription) { + final SettableFuture reservation = SettableFuture.create(); + unblock.addListener( + new Runnable() { + @Override + public void run() { + reservation.set( + new ResourceReservation() { + @Override + public void release() {} + }); + } + }, + MoreExecutors.directExecutor()); + return reservation; + } + + @Override + public void releaseResource(RequestResourcesDescription resource) {} + + public static class Factory implements FlowControlStrategy.Factory { + + @Override + public FlowControlStrategy create(MirroringOptions options) throws Throwable { + return new BlockingFlowControllerStrategy(); + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/BlockingMismatchDetector.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/BlockingMismatchDetector.java 
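BlockingFlowControllerStrategy above gates every resource reservation on a single Guava SettableFuture that the test later completes via `unblock()`. The same gating pattern can be sketched with only the JDK's CompletableFuture — a hypothetical stand-in, not the project's Guava-based API:

```java
import java.util.concurrent.CompletableFuture;

public class BlockingGateSketch {
    // Shared gate: every reservation future completes only after unblock().
    private final CompletableFuture<Void> gate = new CompletableFuture<>();

    public CompletableFuture<String> requestReservation() {
        // thenApply runs only once the gate completes, so callers stay
        // pending until the test opens the gate.
        return gate.thenApply(ignored -> "reservation");
    }

    public void unblock() {
        gate.complete(null);
    }

    public static void main(String[] args) {
        BlockingGateSketch strategy = new BlockingGateSketch();
        CompletableFuture<String> r = strategy.requestReservation();
        System.out.println(r.isDone()); // false: still gated
        strategy.unblock();
        System.out.println(r.join());   // reservation
    }
}
```

The design point is the same in both versions: callers never block a thread; they receive a future that is resolved asynchronously when the gate opens.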
new file mode 100644 index 0000000000..8128b5aad5 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/BlockingMismatchDetector.java @@ -0,0 +1,57 @@ +/* + * Copyright 2015 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase.mirroring.utils; + +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import com.google.common.util.concurrent.SettableFuture; +import java.util.concurrent.ExecutionException; + +public class BlockingMismatchDetector extends TestMismatchDetector { + + private static SettableFuture unblock; + + public static void reset() { + unblock = SettableFuture.create(); + } + + public static void unblock() { + unblock.set(null); + } + + public BlockingMismatchDetector(MirroringTracer tracer, Integer i) { + super(tracer, i); + } + + @Override + public void onVerificationStarted() { + super.onVerificationStarted(); + try { + unblock.get(); + } catch (InterruptedException | ExecutionException e) { + throw new RuntimeException(e); + } + } + + public static class Factory implements MismatchDetector.Factory { + + @Override + public MismatchDetector create( + MirroringTracer mirroringTracer, Integer maxLoggedBinaryValueLength) { + return new 
BlockingMismatchDetector(mirroringTracer, maxLoggedBinaryValueLength); + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ConfigurationHelper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ConfigurationHelper.java index a7b7b1c2b5..f1a91507b5 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ConfigurationHelper.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ConfigurationHelper.java @@ -33,8 +33,8 @@ private static void printDatabasesInfo() { public static Configuration newConfiguration() { Configuration configuration = new Configuration(); - fillDefaults(configuration); configuration.addResource(System.getProperty("integration-tests-config-file-name")); + fillDefaults(configuration); return configuration; } @@ -71,11 +71,15 @@ private static void fillDefaults(Configuration configuration) { "com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection"); configuration.setIfUnset( - "google.bigtable.mirroring.mismatch-detector.impl", - TestMismatchDetector.class.getCanonicalName()); + "hbase.client.async.connection.impl", + "com.google.cloud.bigtable.mirroring.hbase2_x.MirroringAsyncConnection"); + + configuration.setIfUnset( + "google.bigtable.mirroring.mismatch-detector.factory-impl", + TestMismatchDetector.Factory.class.getName()); configuration.setIfUnset( - "google.bigtable.mirroring.write-error-consumer.impl", - TestWriteErrorConsumer.class.getCanonicalName()); + "google.bigtable.mirroring.write-error-consumer.factory-impl", + 
TestWriteErrorConsumer.Factory.class.getName()); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ConnectionRule.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ConnectionRule.java index ce29b3ea37..763b259290 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ConnectionRule.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ConnectionRule.java @@ -15,16 +15,14 @@ */ package com.google.cloud.bigtable.hbase.mirroring.utils; +import com.google.cloud.bigtable.hbase.mirroring.utils.compat.TableCreator; import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; import java.io.IOException; import java.util.Random; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.HColumnDescriptor; -import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.junit.rules.ExternalResource; @@ -52,14 +50,18 @@ public MirroringConnection createConnection(ExecutorService executorService) thr public MirroringConnection createConnection( ExecutorService executorService, Configuration configuration) throws IOException { - if (baseMiniCluster != null) { - baseMiniCluster.updateConfigurationWithHbaseMiniClusterProps(configuration); - } + updateConfigurationWithHbaseMiniClusterProps(configuration); Connection 
conn = ConnectionFactory.createConnection(configuration, executorService); return (MirroringConnection) conn; } + public void updateConfigurationWithHbaseMiniClusterProps(Configuration configuration) { + if (baseMiniCluster != null) { + baseMiniCluster.updateConfigurationWithHbaseMiniClusterProps(configuration); + } + } + @Override protected void after() { if (baseMiniCluster != null) { @@ -74,7 +76,7 @@ public String createTableName() { public TableName createTable(byte[]... columnFamilies) throws IOException { String tableName = createTableName(); - try (MirroringConnection connection = createConnection(Executors.newSingleThreadExecutor())) { + try (MirroringConnection connection = createConnection(Executors.newFixedThreadPool(1))) { createTable(connection.getPrimaryConnection(), tableName, columnFamilies); createTable(connection.getSecondaryConnection(), tableName, columnFamilies); return TableName.valueOf(tableName); @@ -91,12 +93,14 @@ public TableName createTable(MirroringConnection connection, byte[]... columnFam public void createTable(Connection connection, String tableName, byte[]... 
columnFamilies) throws IOException { - Admin admin = connection.getAdmin(); - - HTableDescriptor descriptor = new HTableDescriptor(TableName.valueOf(tableName)); - for (byte[] columnFamilyName : columnFamilies) { - descriptor.addFamily(new HColumnDescriptor(columnFamilyName)); + try { + TableCreator tableCreator = + (TableCreator) + Class.forName(System.getProperty("integrations.compat.table-creator-impl")) + .newInstance(); + tableCreator.createTable(connection, tableName, columnFamilies); + } catch (InstantiationException | IllegalAccessException | ClassNotFoundException e) { + throw new IOException(e); } - admin.createTable(descriptor); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/DatabaseHelpers.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/DatabaseHelpers.java index 7e411ef618..29cda9f598 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/DatabaseHelpers.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/DatabaseHelpers.java @@ -19,6 +19,7 @@ import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Comparators; import com.google.common.base.Predicate; @@ -143,6 +144,9 @@ public boolean apply(@NullableDecl byte[] bytes) { }); } + // The predicate allows excluding some rows in the primary database from the consistency check. + // A row is excluded iff the predicate applied to its key returns true. + // This is useful after injecting errors into the secondary database. public void verifyTableConsistency(TableName tableName, Predicate secondaryErrorPredicate) throws IOException { try (MirroringConnection connection = createConnection()) { diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ExecutorServiceRule.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ExecutorServiceRule.java deleted file mode 100644 index b9998af254..0000000000 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/ExecutorServiceRule.java +++ /dev/null @@ -1,39 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License.
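For illustration, the `failEvenRowKeysPredicate` used throughout these tests presumably matches rows whose 8-byte big-endian id (as produced by `Longs.toByteArray`) is even, which is consistent with the `databaseEntriesCount / 2` failure counts asserted above. A JDK-only sketch of such a predicate — hypothetical names, not the project's implementation:

```java
import java.nio.ByteBuffer;
import java.util.function.Predicate;

public class RowFilterSketch {
    // Hypothetical stand-in for failEvenRowKeysPredicate: row keys are
    // 8-byte big-endian longs, and rows with even ids are the ones that
    // were made to fail by FailingHBaseHRegion.
    public static final Predicate<byte[]> failEvenRowKeys =
        rowKey -> ByteBuffer.wrap(rowKey).getLong() % 2 == 0;

    public static void main(String[] args) {
        byte[] even = ByteBuffer.allocate(8).putLong(4).array();
        byte[] odd = ByteBuffer.allocate(8).putLong(5).array();
        System.out.println(failEvenRowKeys.test(even)); // true
        System.out.println(failEvenRowKeys.test(odd));  // false
    }
}
```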
- */ - -package com.google.cloud.bigtable.hbase.mirroring.utils; - -import java.util.concurrent.ExecutorService; -import java.util.concurrent.Executors; -import java.util.concurrent.TimeUnit; -import org.junit.rules.ExternalResource; - -public class ExecutorServiceRule extends ExternalResource { - public ExecutorService executorService; - - public void before() { - executorService = Executors.newCachedThreadPool(); - } - - public void after() { - executorService.shutdown(); - try { - executorService.awaitTermination(10, TimeUnit.SECONDS); - } catch (InterruptedException e) { - e.printStackTrace(); - } - } -} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/HBaseMiniClusterSingleton.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/HBaseMiniClusterSingleton.java index fcebfa5382..63acd13a70 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/HBaseMiniClusterSingleton.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/HBaseMiniClusterSingleton.java @@ -33,7 +33,11 @@ public class HBaseMiniClusterSingleton { public HBaseMiniClusterSingleton() { Configuration configuration = HBaseConfiguration.create(); - configuration.set("hbase.hregion.impl", FailingHBaseHRegion.class.getCanonicalName()); + configuration.set( + "hbase.hregion.impl", + System.getProperty( + "integrations.compat.failingregion.impl", + FailingHBaseHRegion.class.getCanonicalName())); helper = new HBaseTestingUtility(configuration); } diff --git 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/Helpers.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/Helpers.java index c6097bfb42..512927e0cb 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/Helpers.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/Helpers.java @@ -43,7 +43,7 @@ public static Put createPut( public static Put createPut(int id, byte[] family, byte[] qualifier) { byte[] rowAndValue = Longs.toByteArray(id); - return createPut(rowAndValue, family, qualifier, id, rowAndValue); + return createPut(rowAndValue, family, qualifier, rowAndValue); } public static Get createGet(byte[] row, byte[] family, byte[] qualifier) { diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/MismatchDetectorCounter.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/MismatchDetectorCounter.java deleted file mode 100644 index 86c9d4cd31..0000000000 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/MismatchDetectorCounter.java +++ /dev/null @@ -1,95 +0,0 @@ -/* - * Copyright 2015 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package com.google.cloud.bigtable.hbase.mirroring.utils; - -import com.google.common.base.Joiner; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -public class MismatchDetectorCounter { - private int errorCounter; - private int verificationsStartedCounter; - private int verificationsFinishedCounter; - private List errors; - private Map typeErrorMap; - - private MismatchDetectorCounter() { - clearErrors(); - } - - private static MismatchDetectorCounter instance; - - public static synchronized MismatchDetectorCounter getInstance() { - if (instance == null) { - instance = new MismatchDetectorCounter(); - } - return instance; - } - - public synchronized void reportError(String operation, String errorType, String details) { - this.errors.add(String.format("%s %s %s", operation, errorType, details)); - if (!this.typeErrorMap.containsKey(errorType)) { - this.typeErrorMap.put(errorType, 0); - } - this.typeErrorMap.put(errorType, this.typeErrorMap.get(errorType) + 1); - this.errorCounter += 1; - } - - public synchronized void clearErrors() { - this.errorCounter = 0; - this.verificationsStartedCounter = 0; - this.verificationsFinishedCounter = 0; - this.errors = new ArrayList<>(); - this.typeErrorMap = new HashMap<>(); - } - - public synchronized int getErrorCount() { - return this.errorCounter; - } - - public synchronized int getErrorCount(String type) { - if (!this.typeErrorMap.containsKey(type)) { - return 0; - } - return this.typeErrorMap.get(type); - } - - public synchronized List 
getErrors() { - return this.errors; - } - - public synchronized String getErrorsAsString() { - return Joiner.on('\n').join(this.errors); - } - - public synchronized void onVerificationStarted() { - this.verificationsStartedCounter++; - } - - public synchronized void onVerificationFinished() { - this.verificationsFinishedCounter++; - } - - public int getVerificationsStartedCounter() { - return verificationsStartedCounter; - } - - public int getVerificationsFinishedCounter() { - return verificationsFinishedCounter; - } -} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/MismatchDetectorCounterRule.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/MismatchDetectorCounterRule.java index 4acb1ffe1d..000a51ab01 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/MismatchDetectorCounterRule.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/MismatchDetectorCounterRule.java @@ -21,6 +21,6 @@ public class MismatchDetectorCounterRule extends ExternalResource { @Override public void before() { - MismatchDetectorCounter.getInstance().clearErrors(); + TestMismatchDetectorCounter.getInstance().clearErrors(); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestMismatchDetector.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestMismatchDetector.java index 
1c81dcbc31..fbf7a6a258 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestMismatchDetector.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestMismatchDetector.java @@ -19,25 +19,39 @@ import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import java.util.Arrays; import java.util.List; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; public class TestMismatchDetector implements MismatchDetector { - private final MismatchDetectorCounter mismatchCounter = MismatchDetectorCounter.getInstance(); + private final TestMismatchDetectorCounter mismatchCounter = + TestMismatchDetectorCounter.getInstance(); private final MirroringTracer tracer; - public TestMismatchDetector(MirroringTracer tracer) { + public TestMismatchDetector(MirroringTracer tracer, Integer ignored) { this.tracer = tracer; } - public void onError(HBaseOperation operation, String errorType, String details) { - System.out.printf("onError: %s: %s, %s", operation, errorType, details); - mismatchCounter.reportError(operation.getString(), errorType, details); - if (errorType.equals("mismatch")) { - tracer.metricsRecorder.recordReadMismatches(operation, 1); - } + public void onFailure(HBaseOperation operation, Throwable throwable) { + System.out.printf("onFailure: %s: %s", operation, throwable.getMessage()); + mismatchCounter.reportFailure(operation, throwable); + } + + public void onMismatch(HBaseOperation operation, byte[] primary, byte[] secondary) { + 
System.out.printf( + "onMismatch: %s: %s", operation, String.format("%s != %s", primary, secondary)); + mismatchCounter.reportMismatch(operation, primary, secondary); + tracer.metricsRecorder.recordReadMismatches(operation, 1); + } + + public void onLengthMismatch(HBaseOperation operation, int primaryLength, int secondaryLength) { + System.out.printf( + "onLengthMismatch: %s: %s", + operation, String.format("length: %s != %s", primaryLength, secondaryLength)); + mismatchCounter.reportLengthMismatch(operation, primaryLength, secondaryLength); + tracer.metricsRecorder.recordReadMismatches(operation, 1); } public void onVerificationStarted() { @@ -51,7 +65,10 @@ public void onVerificationFinished() { public void exists(Get request, boolean primary, boolean secondary) { onVerificationStarted(); if (primary != secondary) { - onError(HBaseOperation.EXISTS, "mismatch", String.format("%s != %s", primary, secondary)); + onMismatch( + HBaseOperation.EXISTS, + new byte[] {booleanToByte(primary)}, + new byte[] {booleanToByte(secondary)}); } onVerificationFinished(); } @@ -59,20 +76,25 @@ public void exists(Get request, boolean primary, boolean secondary) { @Override public void exists(Get request, Throwable throwable) { onVerificationStarted(); - onError(HBaseOperation.EXISTS, "failure", throwable.getMessage()); + onFailure(HBaseOperation.EXISTS, throwable); onVerificationFinished(); } @Override public void existsAll(List request, boolean[] primary, boolean[] secondary) { onVerificationStarted(); - for (int i = 0; i < primary.length; i++) { - if (primary[i] != secondary[i]) { - onError( - HBaseOperation.EXISTS_ALL, - "mismatch", - String.format("%s != %s", primary[i], secondary[i])); + if (!Arrays.equals(primary, secondary)) { + byte[] primaryValues = new byte[primary.length]; + byte[] secondaryValues = new byte[secondary.length]; + + for (int i = 0; i < primary.length; i++) { + primaryValues[i] = booleanToByte(primary[i]); + } + + for (int i = 0; i < secondary.length; i++) { + 
secondaryValues[i] = booleanToByte(secondary[i]); } + onMismatch(HBaseOperation.EXISTS_ALL, primaryValues, secondaryValues); } onVerificationFinished(); } @@ -80,14 +102,14 @@ public void existsAll(List request, boolean[] primary, boolean[] secondary) @Override public void existsAll(List request, Throwable throwable) { onVerificationStarted(); - onError(HBaseOperation.EXISTS_ALL, "failure", throwable.getMessage()); + onFailure(HBaseOperation.EXISTS_ALL, throwable); onVerificationFinished(); } public void get(Get request, Result primary, Result secondary) { onVerificationStarted(); if (!Comparators.resultsEqual(primary, secondary)) { - onError(HBaseOperation.GET, "mismatch", String.format("%s != %s", primary, secondary)); + onMismatch(HBaseOperation.GET, primary.value(), secondary.value()); } onVerificationFinished(); } @@ -95,7 +117,7 @@ public void get(Get request, Result primary, Result secondary) { @Override public void get(Get request, Throwable throwable) { onVerificationStarted(); - onError(HBaseOperation.GET, "failure", throwable.getMessage()); + onFailure(HBaseOperation.GET, throwable); onVerificationFinished(); } @@ -103,19 +125,13 @@ public void get(Get request, Throwable throwable) { public void get(List request, Result[] primary, Result[] secondary) { onVerificationStarted(); if (primary.length != secondary.length) { - onError( - HBaseOperation.GET_LIST, - "length mismatch", - String.format("%s != %s", primary.length, secondary.length)); + onLengthMismatch(HBaseOperation.GET_LIST, primary.length, secondary.length); return; } for (int i = 0; i < primary.length; i++) { if (!Comparators.resultsEqual(primary[i], secondary[i])) { - onError( - HBaseOperation.GET_LIST, - "mismatch", - String.format("(index=%s), %s != %s", i, primary[i], secondary[i])); + onMismatch(HBaseOperation.GET_LIST, primary[i].value(), secondary[i].value()); } } onVerificationFinished(); @@ -124,54 +140,33 @@ public void get(List request, Result[] primary, Result[] secondary) { @Override 
public void get(List request, Throwable throwable) { onVerificationStarted(); - onError(HBaseOperation.GET_LIST, "failed", throwable.getMessage()); + onFailure(HBaseOperation.GET_LIST, throwable); onVerificationFinished(); } @Override - public void scannerNext(Scan request, int entriesAlreadyRead, Result primary, Result secondary) { - onVerificationStarted(); - if (!Comparators.resultsEqual(primary, secondary)) { - onError(HBaseOperation.NEXT, "mismatch", String.format("%s != %s", primary, secondary)); - } - onVerificationFinished(); + public void scannerNext( + Scan request, ScannerResultVerifier verifier, Result primary, Result secondary) { + verifier.verify(new Result[] {primary}, new Result[] {secondary}); } @Override public void scannerNext(Scan request, int entriesAlreadyRead, Throwable throwable) { onVerificationStarted(); - onError(HBaseOperation.NEXT, "failed", throwable.getMessage()); + onFailure(HBaseOperation.NEXT, throwable); + onVerificationFinished(); } @Override public void scannerNext( - Scan request, int entriesAlreadyRead, Result[] primary, Result[] secondary) { - onVerificationStarted(); - if (primary.length != secondary.length) { - onError( - HBaseOperation.NEXT_MULTIPLE, - "length mismatch", - String.format("%s != %s", primary.length, secondary.length)); - return; - } - - for (int i = 0; i < primary.length; i++) { - if (!Comparators.resultsEqual(primary[i], secondary[i])) { - onError( - HBaseOperation.NEXT_MULTIPLE, - "mismatch", - String.format( - "(index=%s), %s != %s", entriesAlreadyRead + i, primary[i], secondary[i])); - } - } - onVerificationFinished(); + Scan request, ScannerResultVerifier verifier, Result[] primary, Result[] secondary) { + verifier.verify(primary, secondary); } @Override - public void scannerNext( - Scan request, int entriesAlreadyRead, int entriesRequested, Throwable throwable) { + public void scannerNext(Scan request, Throwable throwable) { onVerificationStarted(); - onError(HBaseOperation.NEXT_MULTIPLE, "failure", 
throwable.getMessage()); + onFailure(HBaseOperation.NEXT_MULTIPLE, throwable); onVerificationFinished(); } @@ -179,19 +174,13 @@ public void scannerNext( public void batch(List request, Result[] primary, Result[] secondary) { onVerificationStarted(); if (primary.length != secondary.length) { - onError( - HBaseOperation.BATCH, - "length mismatch", - String.format("%s != %s", primary.length, secondary.length)); + onLengthMismatch(HBaseOperation.BATCH, primary.length, secondary.length); return; } for (int i = 0; i < primary.length; i++) { if (!Comparators.resultsEqual(primary[i], secondary[i])) { - onError( - HBaseOperation.BATCH, - "mismatch", - String.format("(index=%s), %s != %s", i, primary[i], secondary[i])); + onMismatch(HBaseOperation.BATCH, primary[i].value(), secondary[i].value()); } } onVerificationFinished(); @@ -200,7 +189,57 @@ public void batch(List request, Result[] primary, Result[] secondary) { @Override public void batch(List request, Throwable throwable) { onVerificationStarted(); - onError(HBaseOperation.BATCH, "failed", throwable.getMessage()); + onFailure(HBaseOperation.BATCH, throwable); onVerificationFinished(); } + + private static byte booleanToByte(boolean b) { + return (byte) (b ? 
1 : 0); + } + + @Override + public MismatchDetector.ScannerResultVerifier createScannerResultVerifier( + Scan request, int maxBufferedResults) { + return new MemorylessScannerResultVerifier(); + } + + public static class Factory implements MismatchDetector.Factory { + + @Override + public MismatchDetector create( + MirroringTracer mirroringTracer, Integer maxLoggedBinaryValueLength) { + return new TestMismatchDetector(mirroringTracer, maxLoggedBinaryValueLength); + } + } + + public class MemorylessScannerResultVerifier implements MismatchDetector.ScannerResultVerifier { + public MemorylessScannerResultVerifier() {} + + @Override + public void verify(Result[] primary, Result[] secondary) { + onVerificationStarted(); + if (primary.length != secondary.length) { + onLengthMismatch(HBaseOperation.NEXT_MULTIPLE, primary.length, secondary.length); + return; + } + + for (int i = 0; i < primary.length; i++) { + if (!Comparators.resultsEqual(primary[i], secondary[i])) { + onMismatch( + HBaseOperation.NEXT_MULTIPLE, valueOrNull(primary[i]), valueOrNull(secondary[i])); + } + } + onVerificationFinished(); + } + + private byte[] valueOrNull(Result result) { + if (result == null) { + return null; + } + return result.value(); + } + + @Override + public void flush() {} + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestMismatchDetectorCounter.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestMismatchDetectorCounter.java new file mode 100644 index 0000000000..b67978d738 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestMismatchDetectorCounter.java @@ -0,0 +1,140 @@ +/* + * Copyright 2015 Google LLC + * + * 
Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase.mirroring.utils; + +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; + +public class TestMismatchDetectorCounter { + private int verificationsStartedCounter; + private int verificationsFinishedCounter; + private int lengthMismatches; + private List mismatches; + private List failures; + + private TestMismatchDetectorCounter() { + clearErrors(); + } + + private static TestMismatchDetectorCounter instance; + + public static synchronized TestMismatchDetectorCounter getInstance() { + if (instance == null) { + instance = new TestMismatchDetectorCounter(); + } + return instance; + } + + public synchronized void reportFailure(HBaseOperation operation, Throwable error) { + this.failures.add(new Failure(operation, error)); + } + + public synchronized void reportMismatch( + HBaseOperation operation, byte[] primary, byte[] secondary) { + this.mismatches.add(new Mismatch(operation, primary, secondary)); + } + + public synchronized void reportLengthMismatch( + HBaseOperation operation, int primaryLength, int secondaryLength) { + this.lengthMismatches += 1; + } + + public synchronized void clearErrors() { + this.lengthMismatches = 0; + this.verificationsStartedCounter = 0; + this.verificationsFinishedCounter = 0; + this.mismatches = new 
ArrayList<>(); + this.failures = new ArrayList<>(); + } + + public synchronized int getErrorCount() { + return this.getLengthMismatchesCount() + this.getFailureCount() + this.getMismatchCount(); + } + + public synchronized int getFailureCount() { + return this.failures.size(); + } + + public synchronized int getMismatchCount() { + return this.mismatches.size(); + } + + public synchronized int getLengthMismatchesCount() { + return this.lengthMismatches; + } + + public synchronized void onVerificationStarted() { + this.verificationsStartedCounter++; + } + + public synchronized void onVerificationFinished() { + this.verificationsFinishedCounter++; + } + + public int getVerificationsStartedCounter() { + return verificationsStartedCounter; + } + + public int getVerificationsFinishedCounter() { + return verificationsFinishedCounter; + } + + public synchronized List getMismatches() { + return this.mismatches; + } + + public static class Mismatch { + public final byte[] primary; + public final byte[] secondary; + public final HBaseOperation operation; + + public Mismatch(HBaseOperation operation, byte[] primary, byte[] secondary) { + this.primary = primary; + this.secondary = secondary; + this.operation = operation; + } + + @Override + public boolean equals(Object o) { + if (o instanceof Mismatch) { + Mismatch other = (Mismatch) o; + return this.operation == other.operation + && Arrays.equals(this.primary, other.primary) + && Arrays.equals(this.secondary, other.secondary); + } + return false; + } + + @Override + public int hashCode() { + return this.operation.hashCode() + + Arrays.hashCode(this.primary) + + Arrays.hashCode(this.secondary); + } + } + + public static class Failure { + public final Throwable error; + public final HBaseOperation operation; + + public Failure(HBaseOperation operation, Throwable throwable) { + this.operation = operation; + this.error = throwable; + } + } +} diff --git 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestWriteErrorConsumer.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestWriteErrorConsumer.java index 40597396db..52cabb209e 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestWriteErrorConsumer.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/TestWriteErrorConsumer.java @@ -17,8 +17,9 @@ import com.google.cloud.bigtable.mirroring.hbase1_x.utils.DefaultSecondaryWriteErrorConsumer; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Logger; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.FailedMutationLogger; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import com.google.common.base.Preconditions; import java.util.List; import java.util.concurrent.atomic.AtomicInteger; import org.apache.hadoop.hbase.client.Mutation; @@ -29,8 +30,8 @@ public class TestWriteErrorConsumer implements SecondaryWriteErrorConsumer { static AtomicInteger errorCount = new AtomicInteger(0); private final DefaultSecondaryWriteErrorConsumer secondaryWriteErrorConsumer; - public TestWriteErrorConsumer(Logger writeErrorLogger) { - this.secondaryWriteErrorConsumer = new DefaultSecondaryWriteErrorConsumer(writeErrorLogger); + public TestWriteErrorConsumer(FailedMutationLogger failedMutationLogger) { + this.secondaryWriteErrorConsumer = new 
DefaultSecondaryWriteErrorConsumer(failedMutationLogger); } public static int getErrorCount() { @@ -43,7 +44,7 @@ public static void clearErrors() { @Override public void consume(HBaseOperation operation, Row row, Throwable cause) { - assert row instanceof Mutation || row instanceof RowMutations; + Preconditions.checkArgument(row instanceof Mutation || row instanceof RowMutations); errorCount.addAndGet(1); this.secondaryWriteErrorConsumer.consume(operation, row, cause); } @@ -51,9 +52,16 @@ public void consume(HBaseOperation operation, Row row, Throwable cause) { @Override public void consume(HBaseOperation operation, List operations, Throwable cause) { for (Row row : operations) { - assert row instanceof Mutation || row instanceof RowMutations; + Preconditions.checkArgument(row instanceof Mutation || row instanceof RowMutations); } errorCount.addAndGet(operations.size()); this.secondaryWriteErrorConsumer.consume(operation, operations, cause); } + + public static class Factory implements SecondaryWriteErrorConsumer.Factory { + @Override + public SecondaryWriteErrorConsumer create(FailedMutationLogger failedMutationLogger) { + return new TestWriteErrorConsumer(failedMutationLogger); + } + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator.java new file mode 100644 index 0000000000..faef262776 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator.java @@ -0,0 +1,29 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this 
file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase.mirroring.utils.compat; + +import java.io.IOException; +import org.apache.hadoop.hbase.client.Connection; + +/** + * Integration tests in bigtable-hbase-mirroring-client-1.x-integration-tests project are used to + * test both 1.x MirroringConnection and 2.x MirroringConnection. This interface provides a + * compatibility layer between 1.x and 2.x for creating a table and is used only in ITs. + */ +public interface TableCreator { + void createTable(Connection connection, String tableName, byte[]... columnFamilies) + throws IOException; +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator1x.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator1x.java new file mode 100644 index 0000000000..3c48094193 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator1x.java @@ -0,0 +1,37 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase.mirroring.utils.compat; + +import java.io.IOException; +import org.apache.hadoop.hbase.HColumnDescriptor; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; + +public class TableCreator1x implements TableCreator { + @Override + public void createTable(Connection connection, String tableName, byte[]... columnFamilies) + throws IOException { + Admin admin = connection.getAdmin(); + + HTableDescriptor descriptor = new HTableDescriptor(TableName.valueOf(tableName)); + for (byte[] columnFamilyName : columnFamilies) { + descriptor.addFamily(new HColumnDescriptor(columnFamilyName)); + } + admin.createTable(descriptor); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/failinghbaseminicluster/FailingHBaseHRegion.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/failinghbaseminicluster/FailingHBaseHRegion.java index beb408625c..353acc0640 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/failinghbaseminicluster/FailingHBaseHRegion.java +++ 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/failinghbaseminicluster/FailingHBaseHRegion.java @@ -15,6 +15,7 @@ */ package com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster; +import com.google.common.base.Preconditions; import com.google.common.base.Predicate; import java.io.IOException; import java.nio.ByteBuffer; @@ -44,35 +45,11 @@ import org.apache.hadoop.hbase.regionserver.RegionServerServices; import org.apache.hadoop.hbase.wal.WAL; +/** + * Implementation of {@link HRegion} that rejects operations on registered rows. Used to simulate + * server-side errors in integration tests which use in-memory MiniCluster as HBase server. + */ public class FailingHBaseHRegion extends HRegion { - private static Map fakeErrorsMap = new ConcurrentHashMap<>(); - private static Map, OperationStatus> errorConditionMap = - new ConcurrentHashMap<>(); - - public static void failMutation(byte[] row, String message) { - failMutation(row, OperationStatusCode.FAILURE, message); - } - - public static void failMutation(byte[] row, OperationStatusCode statusCode, String message) { - ByteBuffer byteBufferRow = ByteBuffer.wrap(row); - assert !fakeErrorsMap.containsKey(byteBufferRow); - fakeErrorsMap.put(byteBufferRow, new OperationStatus(statusCode, message)); - } - - public static void failMutation(Predicate failCondition, String message) { - failMutation(failCondition, OperationStatusCode.FAILURE, message); - } - - public static void failMutation( - Predicate failCondition, OperationStatusCode statusCode, String message) { - errorConditionMap.put(failCondition, new OperationStatus(statusCode, message)); - } - - public static void clearFailures() { - fakeErrorsMap.clear(); - errorConditionMap.clear(); - } - public FailingHBaseHRegion( HRegionFileSystem fs, WAL wal, @@ -100,31 +77,16 @@ public void mutateRow(RowMutations rm) throws IOException { } 
@Override - public OperationStatus[] batchMutate(Mutation[] mutations, long nonceGroup, long nonce) - throws IOException { - - OperationStatus[] result = new OperationStatus[mutations.length]; - List mutationsToRun = new ArrayList<>(); - - for (int i = 0; i < mutations.length; i++) { - Mutation mutation = mutations[i]; - OperationStatus r = processRowNoThrow(mutation.getRow()); - if (r != null) { - result[i] = r; - } else { - mutationsToRun.add(mutation); - } - } - OperationStatus[] superResult = - super.batchMutate(mutationsToRun.toArray(new Mutation[0]), nonceGroup, nonce); - int redIndex = 0; - for (int i = 0; i < mutations.length; i++) { - if (result[i] == null) { - result[i] = superResult[redIndex]; - redIndex++; - } - } - return result; + public OperationStatus[] batchMutate( + Mutation[] mutations, final long nonceGroup, final long nonce) throws IOException { + return batchMutateWithFailures( + mutations, + new FunctionThrowing() { + @Override + public OperationStatus[] apply(Mutation[] mutations) throws IOException { + return FailingHBaseHRegion.super.batchMutate(mutations, nonceGroup, nonce); + } + }); } @Override @@ -145,7 +107,35 @@ public Result append(Append mutation, long nonceGroup, long nonce) throws IOExce return super.append(mutation, nonceGroup, nonce); } - private static OperationStatus processRowNoThrow(byte[] rowKey) { + private static Map fakeErrorsMap = new ConcurrentHashMap<>(); + private static Map, OperationStatus> errorConditionMap = + new ConcurrentHashMap<>(); + + public static void failMutation(byte[] row, String message) { + failMutation(row, OperationStatusCode.FAILURE, message); + } + + public static void failMutation(byte[] row, OperationStatusCode statusCode, String message) { + ByteBuffer byteBufferRow = ByteBuffer.wrap(row); + Preconditions.checkArgument(!fakeErrorsMap.containsKey(byteBufferRow)); + fakeErrorsMap.put(byteBufferRow, new OperationStatus(statusCode, message)); + } + + public static void failMutation(Predicate 
failCondition, String message) { + failMutation(failCondition, OperationStatusCode.FAILURE, message); + } + + public static void failMutation( + Predicate failCondition, OperationStatusCode statusCode, String message) { + errorConditionMap.put(failCondition, new OperationStatus(statusCode, message)); + } + + public static void clearFailures() { + fakeErrorsMap.clear(); + errorConditionMap.clear(); + } + + public static OperationStatus processRowNoThrow(byte[] rowKey) { ByteBuffer row = ByteBuffer.wrap(rowKey); if (fakeErrorsMap.containsKey(row)) { return fakeErrorsMap.get(row); @@ -158,7 +148,7 @@ private static OperationStatus processRowNoThrow(byte[] rowKey) { return null; } - private static void processRowThrow(byte[] rowKey) throws IOException { + public static void processRowThrow(byte[] rowKey) throws IOException { throwError(processRowNoThrow(rowKey)); } @@ -177,4 +167,38 @@ private static void throwError(OperationStatus operationStatus) throws IOExcepti throw new DoNotRetryIOException(operationStatus.getExceptionMsg()); } } + + public interface FunctionThrowing { + R apply(T t) throws E; + } + + public static OperationStatus[] batchMutateWithFailures( + Mutation[] mutations, FunctionThrowing op) + throws IOException { + OperationStatus[] result = new OperationStatus[mutations.length]; + List mutationsToRun = new ArrayList<>(); + + // We fill some positions in result[] if the mutation is to err + // according to fakeErrorsMap and errorConditionMap. + for (int i = 0; i < mutations.length; i++) { + Mutation mutation = mutations[i]; + OperationStatus r = processRowNoThrow(mutation.getRow()); + if (r != null) { + result[i] = r; + } else { + mutationsToRun.add(mutation); + } + } + + // We fill the remaining positions in result[] with results of op(). 
+ OperationStatus[] superResult = op.apply(mutationsToRun.toArray(new Mutation[0])); + int correspondingSuperResultIdx = 0; + for (int i = 0; i < mutations.length; i++) { + if (result[i] == null) { + result[i] = superResult[correspondingSuperResultIdx]; + correspondingSuperResultIdx++; + } + } + return result; + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml index 40a7096ddc..4ca393bc5c 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml @@ -38,14 +38,4 @@ google.bigtable.mirroring.write-error-log.appender.prefix-path /tmp/write-error-log - - - google.bigtable.mirroring.write-error-log.appender.max-buffer-size - 8388608 - - - - google.bigtable.mirroring.write-error-log.appender.drop-on-overflow - false - diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml index ba85cac6ee..740c8039af 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml @@ -38,14 +38,4 @@ 
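[Editor's note: the partition-and-merge pattern that `batchMutateWithFailures` factors out above — pre-fill result slots for mutations configured to fail, run the rest through the real batch, then merge the real statuses back into the empty slots in the caller's order — can be sketched in isolation. The sketch below uses simplified stand-in types (`String` instead of `Mutation`/`OperationStatus`); it is illustrative only and not part of the patch.]

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionAndMerge {
    // Stand-in for the fakeErrorsMap/errorConditionMap lookup: returns a failure
    // status for mutations we chose to fail, or null if the mutation should run.
    static String injectedFailure(String mutation) {
        return mutation.startsWith("fail") ? "FAILURE:" + mutation : null;
    }

    // Stand-in for super.batchMutate(): everything that reaches it succeeds.
    static String[] applyBatch(List<String> mutations) {
        String[] out = new String[mutations.size()];
        for (int i = 0; i < mutations.size(); i++) {
            out[i] = "SUCCESS:" + mutations.get(i);
        }
        return out;
    }

    public static String[] batchWithFailures(String[] mutations) {
        String[] result = new String[mutations.length];
        List<String> toRun = new ArrayList<>();
        // Pre-fill result slots for mutations that are configured to fail.
        for (int i = 0; i < mutations.length; i++) {
            String status = injectedFailure(mutations[i]);
            if (status != null) {
                result[i] = status;
            } else {
                toRun.add(mutations[i]);
            }
        }
        // Run the remainder and merge its statuses back into the empty slots,
        // preserving the caller's original ordering.
        String[] ran = applyBatch(toRun);
        int j = 0;
        for (int i = 0; i < mutations.length; i++) {
            if (result[i] == null) {
                result[i] = ran[j++];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String[] r = batchWithFailures(new String[] {"put1", "fail2", "put3"});
        System.out.println(String.join(",", r));
        // prints SUCCESS:put1,FAILURE:fail2,SUCCESS:put3
    }
}
```

This is the same shape the patch uses to inject server-side errors while still delegating the non-failing mutations to the real `HRegion#batchMutate`.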
google.bigtable.mirroring.write-error-log.appender.prefix-path /tmp/write-error-log - - - google.bigtable.mirroring.write-error-log.appender.max-buffer-size - 8388608 - - - - google.bigtable.mirroring.write-error-log.appender.drop-on-overflow - false - diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringBufferedMutator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringBufferedMutator.java deleted file mode 100644 index 0cd009870a..0000000000 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringBufferedMutator.java +++ /dev/null @@ -1,558 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */
-package com.google.cloud.bigtable.mirroring.hbase1_x;
-
-import com.google.api.core.InternalApi;
-import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException;
-import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger;
-import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer;
-import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController;
-import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation;
-import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription;
-import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation;
-import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer;
-import com.google.common.util.concurrent.FutureCallback;
-import com.google.common.util.concurrent.Futures;
-import com.google.common.util.concurrent.ListenableFuture;
-import com.google.common.util.concurrent.ListeningExecutorService;
-import com.google.common.util.concurrent.MoreExecutors;
-import com.google.common.util.concurrent.SettableFuture;
-import io.opencensus.common.Scope;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Set;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.ExecutorService;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.BufferedMutator;
-import org.apache.hadoop.hbase.client.BufferedMutatorParams;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.Mutation;
-import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
-import org.apache.hadoop.hbase.client.Row;
-import org.checkerframework.checker.nullness.compatqual.NullableDecl;
-
-/**
- * BufferedMutator that mirrors writes performed on the first database to the secondary database.
- *
- * <p>We want to perform a secondary write only if we are certain that it was successfully applied
- * on the primary database. The HBase 1.x API doesn't give its user any indication of when
- * asynchronous writes complete; only performing a synchronous {@link BufferedMutator#flush()}
- * ensures that all previously buffered mutations are done. To achieve our goal we store a copy of
- * all mutations sent to the primary BufferedMutator in an internal buffer. When the size of the
- * buffer reaches a threshold of {@link
- * com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper#MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH}
- * bytes, we perform a flush in a worker thread. After the flush we pass the collected mutations to
- * the secondary BufferedMutator and flush it. Writes that have failed on the primary are not
- * forwarded to the secondary; writes that have failed on the secondary are forwarded to the {@link
- * SecondaryWriteErrorConsumer#consume(HBaseOperation, Mutation, Throwable)} handler.
- *
- * <p>Moreover, we perform our custom flow control to prevent unbounded growth of memory - calls to
- * mutate() might block if the secondary database lags behind. We account for the size of all
- * operations that were placed in the primary BufferedMutator but weren't yet executed and
- * confirmed on the secondary BufferedMutator (or until we are informed that they have failed on
- * the primary).
- */
-@InternalApi("For internal usage only")
-public class MirroringBufferedMutator implements BufferedMutator {
-  private static final Logger Log = new Logger(MirroringBufferedMutator.class);
-  private final BufferedMutator primaryBufferedMutator;
-  private final BufferedMutator secondaryBufferedMutator;
-  private final FlowController flowController;
-  private final ListeningExecutorService executorService;
-  private final SecondaryWriteErrorConsumer secondaryWriteErrorConsumer;
-  private final MirroringTracer mirroringTracer;
-
-  /** Configuration that was used to configure this instance. */
-  private final Configuration configuration;
-  /** Parameters that were used to create this instance. */
-  private final BufferedMutatorParams bufferedMutatorParams;
-  /** Size that {@link #mutationsBuffer} should reach to invoke a flush() on the primary database. */
-  private final long mutationsBufferFlushIntervalBytes;
-
-  /**
-   * Set of {@link Row}s that were passed to the primary BufferedMutator but failed. We create an
-   * entry in this collection every time our error handler is called by the primary
-   * BufferedMutator. Those entries are consulted before we perform mutations on the secondary
-   * BufferedMutator; if a {@link Row} instance scheduled for insertion is in this collection, it
-   * is omitted and the corresponding entry is removed from the set.
- */ - private final Set failedPrimaryOperations = - Collections.newSetFromMap(new ConcurrentHashMap()); - - /** - * Internal buffer with mutations that were passed to primary BufferedMutator but were not yet - * scheduled to be confirmed and written to the secondary database in {@link #scheduleFlush()}. - */ - private ArrayList mutationsBuffer; - - private long mutationsBufferSizeBytes; - - /** - * {@link ResourceReservation}s obtained from {@link #flowController} that represent resources - * used by mutations kept in {@link #mutationsBuffer}. - */ - private ArrayList reservations; - - /** - * Exceptions caught when performing asynchronous flush() on primary BufferedMutator that should - * be rethrown to inform the user about failed writes. - */ - private List exceptionsToBeThrown = new ArrayList<>(); - - private boolean closed = false; - - public MirroringBufferedMutator( - Connection primaryConnection, - Connection secondaryConnection, - BufferedMutatorParams bufferedMutatorParams, - MirroringConfiguration configuration, - FlowController flowController, - ExecutorService executorService, - SecondaryWriteErrorConsumer secondaryWriteErrorConsumer, - MirroringTracer mirroringTracer) - throws IOException { - final ExceptionListener userListener = bufferedMutatorParams.getListener(); - ExceptionListener primaryErrorsListener = - new ExceptionListener() { - @Override - public void onException( - RetriesExhaustedWithDetailsException e, BufferedMutator bufferedMutator) - throws RetriesExhaustedWithDetailsException { - handlePrimaryException(e); - userListener.onException(e, bufferedMutator); - } - }; - - ExceptionListener secondaryErrorsListener = - new ExceptionListener() { - @Override - public void onException( - RetriesExhaustedWithDetailsException e, BufferedMutator bufferedMutator) - throws RetriesExhaustedWithDetailsException { - reportWriteErrors(e); - } - }; - - this.primaryBufferedMutator = - primaryConnection.getBufferedMutator( - 
copyMutatorParamsAndSetListener(bufferedMutatorParams, primaryErrorsListener)); - this.secondaryBufferedMutator = - secondaryConnection.getBufferedMutator( - copyMutatorParamsAndSetListener(bufferedMutatorParams, secondaryErrorsListener)); - this.flowController = flowController; - this.mutationsBufferFlushIntervalBytes = - configuration.mirroringOptions.bufferedMutatorBytesToFlush; - this.executorService = MoreExecutors.listeningDecorator(executorService); - this.configuration = configuration; - this.bufferedMutatorParams = bufferedMutatorParams; - this.secondaryWriteErrorConsumer = secondaryWriteErrorConsumer; - - this.mutationsBuffer = new ArrayList<>(); - this.reservations = new ArrayList<>(); - this.mirroringTracer = mirroringTracer; - } - - @Override - public TableName getName() { - return this.bufferedMutatorParams.getTableName(); - } - - @Override - public Configuration getConfiguration() { - return this.configuration; - } - - @Override - public void mutate(Mutation mutation) throws IOException { - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BUFFERED_MUTATOR_MUTATE)) { - mutateScoped(Collections.singletonList(mutation)); - } - } - - @Override - public void mutate(final List list) throws IOException { - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope( - HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST)) { - mutateScoped(list); - } - } - - private void mutateScoped(final List list) throws IOException { - IOException primaryException = null; - try { - this.mirroringTracer.spanFactory.wrapPrimaryOperation( - new CallableThrowingIOException() { - @Override - public Void call() throws IOException { - primaryBufferedMutator.mutate(list); - return null; - } - }, - HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST); - } catch (IOException e) { - primaryException = e; - } finally { - // This call might block - we have confirmed that mutate() calls on BufferedMutator from - // HBase client library might also block. 
- addSecondaryMutation(list, primaryException); - } - // Throw exceptions that were thrown by ExceptionListener on primary BufferedMutator which we - // have caught when calling flush. - throwExceptionIfAvailable(); - } - - /** - * This method is called from within {@code finally} block. Currently processed exception is - * passed in primaryException, if any. This method shouldn't throw any exception if - * primaryException != null. - */ - private void addSecondaryMutation( - List mutations, IOException primaryException) throws IOException { - RequestResourcesDescription resourcesDescription = new RequestResourcesDescription(mutations); - ListenableFuture reservationFuture = - flowController.asyncRequestResource(resourcesDescription); - - ResourceReservation reservation; - try { - try (Scope scope = this.mirroringTracer.spanFactory.flowControlScope()) { - reservation = reservationFuture.get(); - } - } catch (InterruptedException | ExecutionException e) { - // We won't write those mutations to secondary database, they should be reported to - // secondaryWriteErrorConsumer. - reportWriteErrors(mutations, e); - - setInterruptedFlagInInterruptedException(e); - if (primaryException != null) { - // We are currently in a finally block handling an exception, we shouldn't throw anything. - primaryException.addSuppressed(e); - return; - } else { - throw new IOException(e); - } - } - - synchronized (this) { - this.mutationsBuffer.addAll(mutations); - this.reservations.add(reservation); - this.mutationsBufferSizeBytes += resourcesDescription.sizeInBytes; - if (this.mutationsBufferSizeBytes > this.mutationsBufferFlushIntervalBytes) { - // We are not afraid of multiple simultaneous flushes: - // - HBase clients are thread-safe. - // - Each failed Row should be reported and placed in `failedPrimaryOperations` once. - // - Each issued Row will be consulted with `failedPrimaryOperations` only once, because - // each flush sets up a clean buffer for incoming mutations. 
- scheduleFlush(); - } - } - } - - private void reportWriteErrors(RetriesExhaustedWithDetailsException e) { - try (Scope scope = this.mirroringTracer.spanFactory.writeErrorScope()) { - for (int i = 0; i < e.getNumExceptions(); i++) { - this.secondaryWriteErrorConsumer.consume( - HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST, e.getRow(i), e.getCause(i)); - } - } - } - - private void reportWriteErrors(List mutations, Throwable cause) { - try (Scope scope = this.mirroringTracer.spanFactory.writeErrorScope()) { - this.secondaryWriteErrorConsumer.consume( - HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST, mutations, cause); - } - } - - @Override - public synchronized void close() throws IOException { - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BUFFERED_MUTATOR_CLOSE)) { - if (this.closed) { - this.mirroringTracer - .spanFactory - .getCurrentSpan() - .addAnnotation("MirroringBufferedMutator closed more than once."); - return; - } - this.closed = true; - - List exceptions = new ArrayList<>(); - - try { - scheduleFlush().secondaryFlushFinished.get(); - } catch (InterruptedException | ExecutionException e) { - setInterruptedFlagInInterruptedException(e); - exceptions.add(new IOException(e)); - } - try { - this.mirroringTracer.spanFactory.wrapPrimaryOperation( - new CallableThrowingIOException() { - @Override - public Void call() throws IOException { - MirroringBufferedMutator.this.primaryBufferedMutator.close(); - return null; - } - }, - HBaseOperation.BUFFERED_MUTATOR_CLOSE); - } catch (IOException e) { - exceptions.add(e); - } - try { - this.mirroringTracer.spanFactory.wrapSecondaryOperation( - new CallableThrowingIOException() { - @Override - public Void call() throws IOException { - MirroringBufferedMutator.this.secondaryBufferedMutator.close(); - return null; - } - }, - HBaseOperation.BUFFERED_MUTATOR_CLOSE); - } catch (IOException e) { - exceptions.add(e); - } - if (!exceptions.isEmpty()) { - Iterator exceptionIterator = 
exceptions.iterator(); - IOException firstException = exceptionIterator.next(); - while (exceptionIterator.hasNext()) { - firstException.addSuppressed(exceptionIterator.next()); - } - throw firstException; - } - } - } - - @Override - public void flush() throws IOException { - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BUFFERED_MUTATOR_FLUSH)) { - try { - scheduleFlush().primaryFlushFinished.get(); - } catch (InterruptedException | ExecutionException e) { - setInterruptedFlagInInterruptedException(e); - throw new IOException(e); - } - throwExceptionIfAvailable(); - } - } - - @Override - public long getWriteBufferSize() { - return this.bufferedMutatorParams.getWriteBufferSize(); - } - - private void handlePrimaryException(RetriesExhaustedWithDetailsException e) { - for (int i = 0; i < e.getNumExceptions(); i++) { - failedPrimaryOperations.add(e.getRow(i)); - } - } - - private BufferedMutatorParams copyMutatorParamsAndSetListener( - BufferedMutatorParams bufferedMutatorParams, ExceptionListener exceptionListener) { - BufferedMutatorParams params = new BufferedMutatorParams(bufferedMutatorParams.getTableName()); - params.writeBufferSize(bufferedMutatorParams.getWriteBufferSize()); - params.pool(bufferedMutatorParams.getPool()); - params.maxKeyValueSize(bufferedMutatorParams.getMaxKeyValueSize()); - params.listener(exceptionListener); - return params; - } - - private static class FlushFutures { - ListenableFuture primaryFlushFinished; - ListenableFuture secondaryFlushFinished; - - public FlushFutures( - ListenableFuture primaryFlushFinished, SettableFuture secondaryFlushFinished) { - this.primaryFlushFinished = primaryFlushFinished; - this.secondaryFlushFinished = secondaryFlushFinished; - } - } - - private synchronized FlushFutures scheduleFlush() { - try (Scope scope = this.mirroringTracer.spanFactory.scheduleFlushScope()) { - this.mutationsBufferSizeBytes = 0; - - final List dataToFlush = this.mutationsBuffer; - 
this.mutationsBuffer = new ArrayList<>(); - - final List flushReservations = this.reservations; - this.reservations = new ArrayList<>(); - - final SettableFuture secondaryFlushFinished = SettableFuture.create(); - - ListenableFuture primaryFlushFinished = - this.executorService.submit( - this.mirroringTracer.spanFactory.wrapWithCurrentSpan( - new Callable() { - @Override - public Void call() throws Exception { - MirroringBufferedMutator.this.mirroringTracer.spanFactory - .wrapPrimaryOperation( - new CallableThrowingIOException() { - @Override - public Void call() throws IOException { - MirroringBufferedMutator.this.primaryBufferedMutator.flush(); - return null; - } - }, - HBaseOperation.BUFFERED_MUTATOR_FLUSH); - return null; - } - })); - - Futures.addCallback( - primaryFlushFinished, - this.mirroringTracer.spanFactory.wrapWithCurrentSpan( - new FutureCallback() { - @Override - public void onSuccess(@NullableDecl Void aVoid) { - performSecondaryFlush(dataToFlush, flushReservations, secondaryFlushFinished); - } - - @Override - public void onFailure(Throwable throwable) { - if (throwable instanceof RetriesExhaustedWithDetailsException) { - // If user-defined listener has thrown an exception - // (RetriesExhaustedWithDetailsException is the only exception that can be - // thrown), we know that some of the writes failed. Our handler has already - // handled those errors. We should also rethrow this exception when user - // calls mutate/flush the next time. - saveExceptionToBeThrown((RetriesExhaustedWithDetailsException) throwable); - - performSecondaryFlush(dataToFlush, flushReservations, secondaryFlushFinished); - } else { - // In other cases, we do not know what caused the error and we have no idea - // what was really written to the primary DB, the best we can do is write - // them to on-disk log. 
Trying to save them to secondary database is not a - // good idea - if current thread was interrupted then next flush might also - // be, only increasing our confusion, moreover, that may cause secondary - // writes that were not completed on primary. - reportWriteErrors(dataToFlush, throwable); - releaseReservations(flushReservations); - secondaryFlushFinished.setException(throwable); - } - } - }), - MoreExecutors.directExecutor()); - return new FlushFutures(primaryFlushFinished, secondaryFlushFinished); - } - } - - private synchronized void saveExceptionToBeThrown( - RetriesExhaustedWithDetailsException exception) { - this.exceptionsToBeThrown.add(exception); - } - - private RetriesExhaustedWithDetailsException getExceptionsToBeThrown() { - List exceptions; - synchronized (this) { - if (this.exceptionsToBeThrown.isEmpty()) { - return null; - } - exceptions = this.exceptionsToBeThrown; - this.exceptionsToBeThrown = new ArrayList<>(); - } - - List rows = new ArrayList<>(); - List causes = new ArrayList<>(); - List hostnames = new ArrayList<>(); - - for (RetriesExhaustedWithDetailsException e : exceptions) { - for (int i = 0; i < e.getNumExceptions(); i++) { - rows.add(e.getRow(i)); - causes.add(e.getCause(i)); - hostnames.add(e.getHostnamePort(i)); - } - } - return new RetriesExhaustedWithDetailsException(causes, rows, hostnames); - } - - private void throwExceptionIfAvailable() throws RetriesExhaustedWithDetailsException { - RetriesExhaustedWithDetailsException e = getExceptionsToBeThrown(); - if (e != null) { - throw e; - } - } - - private void performSecondaryFlush( - List dataToFlush, - List flushReservations, - SettableFuture completionFuture) { - final List successfulOperations = removeFailedMutations(dataToFlush); - try { - if (!successfulOperations.isEmpty()) { - this.mirroringTracer.spanFactory.wrapSecondaryOperation( - new CallableThrowingIOException() { - @Override - public Void call() throws IOException { - 
secondaryBufferedMutator.mutate(successfulOperations); - return null; - } - }, - HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST); - - this.mirroringTracer.spanFactory.wrapSecondaryOperation( - new CallableThrowingIOException() { - @Override - public Void call() throws IOException { - secondaryBufferedMutator.flush(); - return null; - } - }, - HBaseOperation.BUFFERED_MUTATOR_FLUSH); - } - releaseReservations(flushReservations); - completionFuture.set(null); - } catch (Throwable e) { - // Our listener is registered and should catch non-fatal errors. This is either - // InterruptedIOException or some RuntimeError, in both cases we should consider operation as - // not completed - the worst that can happen is that we will have some writes in both - // secondary database and on-disk log. - reportWriteErrors(dataToFlush, e); - releaseReservations(flushReservations); - completionFuture.setException(e); - } - } - - private void releaseReservations(List flushReservations) { - for (ResourceReservation reservation : flushReservations) { - reservation.release(); - } - } - - private List removeFailedMutations(List dataToFlush) { - List successfulMutations = new ArrayList<>(); - for (Mutation mutation : dataToFlush) { - if (!this.failedPrimaryOperations.remove(mutation)) { - successfulMutations.add(mutation); - } - } - return successfulMutations; - } - - private void setInterruptedFlagInInterruptedException(Exception e) { - if (e instanceof InterruptedException) { - Thread.currentThread().interrupt(); - } - } -} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringConfiguration.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringConfiguration.java index 72c66fe94b..19ae744a46 100644 --- 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringConfiguration.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringConfiguration.java @@ -15,60 +15,44 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x; +import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper; import org.apache.hadoop.conf.Configuration; -public class MirroringConfiguration extends Configuration { - Configuration primaryConfiguration; - Configuration secondaryConfiguration; - MirroringOptions mirroringOptions; +@InternalApi("For internal use only") +public class MirroringConfiguration { + public final Configuration primaryConfiguration; + public final Configuration secondaryConfiguration; + public final MirroringOptions mirroringOptions; + public final Configuration baseConfiguration; - public MirroringConfiguration( - Configuration primaryConfiguration, - Configuration secondaryConfiguration, - Configuration mirroringConfiguration) { - super.set("hbase.client.connection.impl", MirroringConnection.class.getCanonicalName()); - this.primaryConfiguration = primaryConfiguration; - this.secondaryConfiguration = secondaryConfiguration; - this.mirroringOptions = new MirroringOptions(mirroringConfiguration); - } + public MirroringConfiguration(Configuration configuration) { + MirroringConfigurationHelper.checkParameters( + configuration, + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY); - public MirroringConfiguration(Configuration conf) { - super(conf); // Copy-constructor - // In case the user constructed MirroringConfiguration by hand. 
- if (conf instanceof MirroringConfiguration) { - MirroringConfiguration mirroringConfiguration = (MirroringConfiguration) conf; - this.primaryConfiguration = new Configuration(mirroringConfiguration.primaryConfiguration); - this.secondaryConfiguration = - new Configuration(mirroringConfiguration.secondaryConfiguration); - this.mirroringOptions = mirroringConfiguration.mirroringOptions; - } else { - MirroringConfigurationHelper.checkParameters( - conf, - MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY); - - final Configuration primaryConfiguration = - MirroringConfigurationHelper.extractPrefixedConfig( - MirroringConfigurationHelper.MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, conf); - MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( - primaryConfiguration, - conf, - MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, - "hbase.client.connection.impl"); - this.primaryConfiguration = primaryConfiguration; + final Configuration primaryConfiguration = + MirroringConfigurationHelper.extractPrefixedConfig( + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, configuration); + MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( + primaryConfiguration, + configuration, + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, + "hbase.client.connection.impl"); + this.primaryConfiguration = primaryConfiguration; - final Configuration secondaryConfiguration = - MirroringConfigurationHelper.extractPrefixedConfig( - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, conf); - MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( - secondaryConfiguration, - conf, - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, - "hbase.client.connection.impl"); - this.secondaryConfiguration = secondaryConfiguration; + final Configuration secondaryConfiguration 
= + MirroringConfigurationHelper.extractPrefixedConfig( + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, configuration); + MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( + secondaryConfiguration, + configuration, + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, + "hbase.client.connection.impl"); + this.secondaryConfiguration = secondaryConfiguration; - this.mirroringOptions = new MirroringOptions(conf); - } + this.mirroringOptions = new MirroringOptions(configuration); + this.baseConfiguration = configuration; } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringConnection.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringConnection.java index b6489345e0..3645bc3c97 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringConnection.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringConnection.java @@ -15,27 +15,30 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x; +import com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.MirroringBufferedMutator; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.AccumulatedExceptions; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; -import 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Appender; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Logger; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Serializer; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowControlStrategy; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.FailedMutationLogger; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.reflection.ReflectionConstructor; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import com.google.common.annotations.VisibleForTesting; +import com.google.common.base.Preconditions; +import com.google.common.util.concurrent.MoreExecutors; +import com.google.common.util.concurrent.SettableFuture; import io.opencensus.common.Scope; import java.io.IOException; -import java.io.InterruptedIOException; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.TableName; @@ -51,20 +54,45 @@ public class MirroringConnection implements Connection { private static final com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger Log = new com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger(MirroringConnection.class); - private 
final FlowController flowController; - private final ExecutorService executorService; - private final MismatchDetector mismatchDetector; - private final ListenableReferenceCounter referenceCounter; - private final MirroringTracer mirroringTracer; - private final SecondaryWriteErrorConsumer secondaryWriteErrorConsumer; - private final ReadSampler readSampler; - private final Logger failedWritesLogger; - private final MirroringConfiguration configuration; + protected final FlowController flowController; + protected final ExecutorService executorService; + protected final MismatchDetector mismatchDetector; + /** + * Counter of all asynchronous operations that are using the secondary connection. Incremented + * when scheduling operations by underlying {@link MirroringTable} and {@link + * MirroringResultScanner}. + */ + protected final ListenableReferenceCounter referenceCounter; + + protected final MirroringTracer mirroringTracer; + protected final SecondaryWriteErrorConsumer secondaryWriteErrorConsumer; + protected final ReadSampler readSampler; + private final FailedMutationLogger failedMutationLogger; + protected final MirroringConfiguration configuration; private final Connection primaryConnection; private final Connection secondaryConnection; private final AtomicBoolean closed = new AtomicBoolean(false); private final AtomicBoolean aborted = new AtomicBoolean(false); - private final boolean performWritesConcurrently; + + /** + * Enables concurrent writes mode. Should always be enabled together with {@link + * #waitForSecondaryWrites}. + * + *

In this mode some of the writes ({@link org.apache.hadoop.hbase.client.Put}s, {@link + * org.apache.hadoop.hbase.client.Delete}s and {@link + * org.apache.hadoop.hbase.client.RowMutations}) performed using {@link + * org.apache.hadoop.hbase.client.Table} API will be performed concurrently and {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.ConcurrentMirroringBufferedMutator} + * instances will be returned by {@link + * org.apache.hadoop.hbase.client.Connection#getBufferedMutator}. Moreover, all operations in this + * mode wait for secondary database operation to finish before returning to the user (because + * {@link #waitForSecondaryWrites} should be set). + */ + protected final boolean performWritesConcurrently; + + protected final boolean waitForSecondaryWrites; + // Depending on configuration, ensures that mutations have an explicitly set timestamp. + protected final Timestamper timestamper; /** * The constructor called from {@link @@ -75,15 +103,40 @@ public class MirroringConnection implements Connection { * are passed back to the user. */ public MirroringConnection(Configuration conf, boolean managed, ExecutorService pool, User user) + throws Throwable { + this(new MirroringConfiguration(conf), pool, user); + // This is an always-false legacy hbase parameter. + Preconditions.checkArgument(!managed, "Mirroring client doesn't support managed connections."); + } + + public MirroringConnection(MirroringConfiguration mirroringConfiguration, ExecutorService pool) throws IOException { - assert !managed; // This is always-false legacy hbase parameter. 
- this.configuration = new MirroringConfiguration(conf); + this( + mirroringConfiguration, + pool, + ConnectionFactory.createConnection(mirroringConfiguration.primaryConfiguration, pool), + ConnectionFactory.createConnection(mirroringConfiguration.secondaryConfiguration, pool)); + } + + private MirroringConnection(MirroringConfiguration conf, ExecutorService pool, User user) + throws IOException { + this( + conf, + pool, + ConnectionFactory.createConnection(conf.primaryConfiguration, pool, user), + ConnectionFactory.createConnection(conf.secondaryConfiguration, pool, user)); + } + + private MirroringConnection( + MirroringConfiguration conf, + ExecutorService pool, + Connection primaryConnection, + Connection secondaryConnection) { + this.configuration = conf; this.mirroringTracer = new MirroringTracer(); - this.primaryConnection = - ConnectionFactory.createConnection(this.configuration.primaryConfiguration, pool, user); - this.secondaryConnection = - ConnectionFactory.createConnection(this.configuration.secondaryConfiguration, pool, user); + this.primaryConnection = primaryConnection; + this.secondaryConnection = secondaryConnection; if (pool == null) { this.executorService = Executors.newCachedThreadPool(); @@ -92,38 +145,64 @@ public MirroringConnection(Configuration conf, boolean managed, ExecutorService } referenceCounter = new ListenableReferenceCounter(); - this.flowController = - new FlowController( - ReflectionConstructor.construct( - this.configuration.mirroringOptions.flowControllerStrategyClass, - this.configuration.mirroringOptions)); - this.mismatchDetector = - ReflectionConstructor.construct( - this.configuration.mirroringOptions.mismatchDetectorClass, this.mirroringTracer); - - this.failedWritesLogger = - new Logger( - ReflectionConstructor.construct( - this.configuration.mirroringOptions.writeErrorLogAppenderClass, - Configuration.class, - this.configuration), - ReflectionConstructor.construct( - 
this.configuration.mirroringOptions.writeErrorLogSerializerClass)); - - final SecondaryWriteErrorConsumer secondaryWriteErrorConsumer = - ReflectionConstructor.construct( - this.configuration.mirroringOptions.writeErrorConsumerClass, this.failedWritesLogger); - - this.secondaryWriteErrorConsumer = - new SecondaryWriteErrorConsumerWithMetrics( - this.mirroringTracer, secondaryWriteErrorConsumer); - this.readSampler = new ReadSampler(this.configuration.mirroringOptions.readSamplingRate); - this.performWritesConcurrently = this.configuration.mirroringOptions.performWritesConcurrently; + + try { + this.flowController = + new FlowController( + this.configuration + .mirroringOptions + .flowControllerStrategyFactoryClass + .newInstance() + .create(this.configuration.mirroringOptions)); + this.mismatchDetector = + this.configuration + .mirroringOptions + .mismatchDetectorFactoryClass + .newInstance() + .create( + this.mirroringTracer, configuration.mirroringOptions.maxLoggedBinaryValueLength); + + this.failedMutationLogger = + new FailedMutationLogger( + this.configuration + .mirroringOptions + .faillog + .writeErrorLogAppenderFactoryClass + .newInstance() + .create(this.configuration.mirroringOptions.faillog), + this.configuration + .mirroringOptions + .faillog + .writeErrorLogSerializerFactoryClass + .newInstance() + .create()); + + final SecondaryWriteErrorConsumer secondaryWriteErrorConsumer = + this.configuration + .mirroringOptions + .writeErrorConsumerFactoryClass + .newInstance() + .create(this.failedMutationLogger); + + this.timestamper = + Timestamper.create(this.configuration.mirroringOptions.enableDefaultClientSideTimestamps); + + this.secondaryWriteErrorConsumer = + new SecondaryWriteErrorConsumerWithMetrics( + this.mirroringTracer, secondaryWriteErrorConsumer); + this.readSampler = new ReadSampler(this.configuration.mirroringOptions.readSamplingRate); + this.performWritesConcurrently = + this.configuration.mirroringOptions.performWritesConcurrently; + 
this.waitForSecondaryWrites = this.configuration.mirroringOptions.waitForSecondaryWrites; + } catch (Throwable throwable) { + // Throwables are thrown by the `newInstance` and `create` methods. + throw new RuntimeException(throwable); + } } @Override public Configuration getConfiguration() { - return this.configuration; + return this.configuration.baseConfiguration; } @Override @@ -147,19 +226,20 @@ public Table call() throws IOException { }, HBaseOperation.GET_TABLE); Table secondaryTable = this.secondaryConnection.getTable(tableName); - MirroringTable table = - new MirroringTable( - primaryTable, - secondaryTable, - executorService, - this.mismatchDetector, - this.flowController, - this.secondaryWriteErrorConsumer, - this.readSampler, - this.performWritesConcurrently, - this.mirroringTracer); - this.referenceCounter.holdReferenceUntilClosing(table); - return table; + return new MirroringTable( + primaryTable, + secondaryTable, + executorService, + this.mismatchDetector, + this.flowController, + this.secondaryWriteErrorConsumer, + this.readSampler, + this.timestamper, + this.performWritesConcurrently, + this.waitForSecondaryWrites, + this.mirroringTracer, + this.referenceCounter, + this.configuration.mirroringOptions.maxLoggedBinaryValueLength); } } @@ -171,23 +251,23 @@ public BufferedMutator getBufferedMutator(TableName tableName) throws IOExceptio @Override public BufferedMutator getBufferedMutator(BufferedMutatorParams bufferedMutatorParams) throws IOException { - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.GET_BUFFERED_MUTATOR)) { - return new MirroringBufferedMutator( - primaryConnection, - secondaryConnection, - bufferedMutatorParams, - configuration, - flowController, - executorService, - secondaryWriteErrorConsumer, - mirroringTracer); - } + return MirroringBufferedMutator.create( + performWritesConcurrently, + primaryConnection, + secondaryConnection, + bufferedMutatorParams, + configuration, + flowController, + 
executorService, + secondaryWriteErrorConsumer, + referenceCounter, + timestamper, + mirroringTracer); } @Override public RegionLocator getRegionLocator(TableName tableName) throws IOException { - throw new UnsupportedOperationException(); + return this.primaryConnection.getRegionLocator(tableName); } @Override @@ -204,30 +284,28 @@ public void close() throws IOException { return; } - // TODO: we should add a timeout to prevent deadlock in case of a bug on our side. + final AccumulatedExceptions exceptions = new AccumulatedExceptions(); try { - closeMirroringConnectionAndWaitForAsyncOperations(); - } catch (InterruptedException e) { - IOException wrapperException = new InterruptedIOException(); - wrapperException.initCause(e); - throw wrapperException; - } finally { - try { - this.failedWritesLogger.close(); - } catch (Exception e) { - throw new IOException(e); - } - } - - AccumulatedExceptions exceptions = new AccumulatedExceptions(); - try { - this.primaryConnection.close(); + primaryConnection.close(); } catch (IOException e) { exceptions.add(e); } + CallableThrowingIOException closeSecondaryConnection = + new CallableThrowingIOException() { + @Override + public Void call() { + try { + secondaryConnection.close(); + } catch (IOException e) { + exceptions.add(e); + } + return null; + } + }; + try { - this.secondaryConnection.close(); + terminateSecondaryConnectionWithTimeout(closeSecondaryConnection); } catch (IOException e) { exceptions.add(e); } @@ -242,7 +320,7 @@ public boolean isClosed() { } @Override - public void abort(String s, Throwable throwable) { + public void abort(final String s, final Throwable throwable) { try (Scope scope = this.mirroringTracer.spanFactory.operationScope( HBaseOperation.MIRRORING_CONNECTION_ABORT)) { @@ -250,14 +328,23 @@ public void abort(String s, Throwable throwable) { return; } + primaryConnection.abort(s, throwable); + + final CallableThrowingIOException abortSecondaryConnection = + new CallableThrowingIOException() { + 
@Override + public Void call() { + secondaryConnection.abort(s, throwable); + return null; + } + }; try { - closeMirroringConnectionAndWaitForAsyncOperations(); - } catch (InterruptedException e) { - throw new RuntimeException(e); + terminateSecondaryConnectionWithTimeout(abortSecondaryConnection); + } catch (IOException e) { + if (e.getCause() instanceof InterruptedException) { + throw new RuntimeException(e.getCause()); + } } - - this.primaryConnection.abort(s, throwable); - this.secondaryConnection.abort(s, throwable); } } @@ -266,20 +353,54 @@ public boolean isAborted() { return this.aborted.get(); } + @VisibleForTesting public Connection getPrimaryConnection() { return this.primaryConnection; } + @VisibleForTesting public Connection getSecondaryConnection() { return this.secondaryConnection; } - private void closeMirroringConnectionAndWaitForAsyncOperations() throws InterruptedException { + private void terminateSecondaryConnectionWithTimeout( + final CallableThrowingIOException terminatingAction) throws IOException { + final SettableFuture terminationFinishedFuture = SettableFuture.create(); + + // The secondary termination action should be run after all in-flight requests are finished. + this.referenceCounter + .getOnLastReferenceClosed() + .addListener( + new Runnable() { + @Override + public void run() { + try { + terminatingAction.call(); + terminationFinishedFuture.set(null); + } catch (Throwable e) { + terminationFinishedFuture.setException(e); + } + } + }, + MoreExecutors.directExecutor()); + this.referenceCounter.decrementReferenceCount(); try { - this.referenceCounter.getOnLastReferenceClosed().get(); - } catch (ExecutionException e) { - throw new RuntimeException(e); + // Wait for in-flight requests to be finished but with a timeout to prevent deadlock. 
+ terminationFinishedFuture.get( + this.configuration.mirroringOptions.connectionTerminationTimeoutMillis, + TimeUnit.MILLISECONDS); + } catch (ExecutionException | InterruptedException e) { + // If the secondary terminating action has thrown while we were waiting, the error will be + // propagated to the user. + throw new IOException(e); + } catch (TimeoutException e) { + // But if the timeout was reached, we just leave the operation pending. + Log.error( + "MirroringConnection#close() timed out. Some operations on the secondary " + + "database are still in-flight and might be lost; they are not cancelled and " + + "will be performed asynchronously until the program terminates."); + // This error is not reported to the user. + } } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringOperationException.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringOperationException.java new file mode 100644 index 0000000000..c6c8714c34 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringOperationException.java @@ -0,0 +1,133 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x; + +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper; +import com.google.common.base.Preconditions; +import com.google.common.base.Throwables; +import javax.annotation.Nullable; +import org.apache.hadoop.hbase.client.Append; +import org.apache.hadoop.hbase.client.Increment; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Row; + +/** + * Provides additional context for exceptions thrown by mirrored operations when {@link + * MirroringConfigurationHelper.MIRRORING_SYNCHRONOUS_WRITES} is enabled. + * + *

<p>Instances of this class are not thrown directly. These exceptions are attached as the root + * cause of the exception chain (as defined by {@link Exception#getCause()}) returned by the + * MirroringClient when it is set to synchronous mode. One can easily retrieve it using {@link + * MirroringOperationException#extractRootCause(Throwable)}. + * + *

<p>If the thrown exception is a {@link + * org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException} then {@link + * MirroringOperationException} is not added to it directly, but it is added to every exception in + * it. + */ +public class MirroringOperationException extends Exception { + public static class ExceptionDetails { + public final Throwable exception; + public final String hostnameAndPort; + + public ExceptionDetails(Throwable exception) { + this(exception, ""); + } + + public ExceptionDetails(Throwable exception, String hostnameAndPort) { + this.exception = exception; + this.hostnameAndPort = hostnameAndPort; + } + } + + public enum DatabaseIdentifier { + Primary, + Secondary, + Both + } + + /** Identifies which database failed. */ + public final DatabaseIdentifier databaseIdentifier; + /** + * Operation that failed. Might be {@code null} if no specific operation was related to this exception. + * + *

<p>If the operation failed on the primary database, then this operation is one of the operations provided by the + * user. + * + *

<p>If the operation failed on the secondary database, then this operation might not be one of them - this will + * be the case with {@link Increment} and {@link Append} operations that are mirrored using {@link + * Put} to mitigate possible inconsistencies. + */ + public final Row operation; + + /** + * Stores the exception that happened on the secondary database, along with the hostname and port if available. + * + *

<p>If the operation failed on both databases then this field can, but is not required to, have a + * value. + */ + public final ExceptionDetails secondaryException; + + private MirroringOperationException(DatabaseIdentifier databaseIdentifier, Row operation) { + this(databaseIdentifier, operation, null); + } + + private MirroringOperationException( + DatabaseIdentifier databaseIdentifier, Row operation, ExceptionDetails secondaryException) { + Preconditions.checkArgument( + secondaryException == null || databaseIdentifier == DatabaseIdentifier.Both); + this.databaseIdentifier = databaseIdentifier; + this.operation = operation; + this.secondaryException = secondaryException; + } + + public static <T extends Throwable> T markedAsPrimaryException(T e, Row primaryOperation) { + return markedWith( + e, new MirroringOperationException(DatabaseIdentifier.Primary, primaryOperation)); + } + + public static <T extends Throwable> T markedAsBothException( + T e, ExceptionDetails secondaryExceptionDetails, Row primaryOperation) { + return markedWith( + e, + new MirroringOperationException( + DatabaseIdentifier.Both, primaryOperation, secondaryExceptionDetails)); + } + + public static <T extends Throwable> T markedAsSecondaryException(T e, Row secondaryOperation) { + return markedWith( + e, new MirroringOperationException(DatabaseIdentifier.Secondary, secondaryOperation)); + } + + private static <T extends Throwable> T markedWith(T e, Throwable marker) { + Throwables.getRootCause(e).initCause(marker); + return e; + } + + /** + * Extracts the {@link MirroringOperationException} instance from the bottom of the chain of causes. + * + * @param exception Exception thrown by a mirrored operation. + * @return {@link MirroringOperationException} instance if present, {@code null} otherwise. 
+ */ + public static @Nullable MirroringOperationException extractRootCause(Throwable exception) { + Throwable rootCause = Throwables.getRootCause(exception); + if (rootCause instanceof MirroringOperationException) { + return (MirroringOperationException) rootCause; + } + return null; + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringOptions.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringOptions.java index 6968c1f162..adcb09b045 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringOptions.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringOptions.java @@ -17,68 +17,148 @@ import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_CONCURRENT_WRITES; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_CLASS; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_MISMATCH_DETECTOR_CLASS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_CONNECTION_CONNECTION_TERMINATION_TIMEOUT; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_ENABLE_DEFAULT_CLIENT_SIDE_TIMESTAMPS; +import static 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FAILLOG_DROP_ON_OVERFLOW_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FAILLOG_MAX_BUFFER_SIZE_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FAILLOG_PREFIX_PATH_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_FACTORY_CLASS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_MAX_OUTSTANDING_REQUESTS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_MAX_USED_BYTES; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_MISMATCH_DETECTOR_FACTORY_CLASS; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_READ_VERIFICATION_RATE_PERCENT; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_WRITE_ERROR_CONSUMER_CLASS; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_WRITE_ERROR_LOG_APPENDER_CLASS; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_WRITE_ERROR_LOG_SERIALIZER_CLASS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SCANNER_BUFFERED_MISMATCHED_READS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SYNCHRONOUS_WRITES; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_WRITE_ERROR_CONSUMER_FACTORY_CLASS; +import static 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_WRITE_ERROR_LOG_APPENDER_FACTORY_CLASS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_WRITE_ERROR_LOG_MAX_BINARY_VALUE_LENGTH; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_WRITE_ERROR_LOG_SERIALIZER_FACTORY_CLASS; import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.DefaultSecondaryWriteErrorConsumer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Appender; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.DefaultAppender; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.DefaultSerializer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Serializer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowControlStrategy; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestCountingFlowControlStrategy; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper.TimestampingMode; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.DefaultMismatchDetector; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; import com.google.common.base.Preconditions; import org.apache.hadoop.conf.Configuration; @InternalApi("For internal use only") public class MirroringOptions { + public static class Faillog { + Faillog(Configuration configuration) { + this.prefixPath = configuration.get(MIRRORING_FAILLOG_PREFIX_PATH_KEY); + this.maxBufferSize = + configuration.getInt(MIRRORING_FAILLOG_MAX_BUFFER_SIZE_KEY, 20 * 1024 * 1024); + this.dropOnOverflow = configuration.getBoolean(MIRRORING_FAILLOG_DROP_ON_OVERFLOW_KEY, false); + this.writeErrorLogAppenderFactoryClass = + 
configuration.getClass( + MIRRORING_WRITE_ERROR_LOG_APPENDER_FACTORY_CLASS, + DefaultAppender.Factory.class, + Appender.Factory.class); + this.writeErrorLogSerializerFactoryClass = + configuration.getClass( + MIRRORING_WRITE_ERROR_LOG_SERIALIZER_FACTORY_CLASS, + DefaultSerializer.Factory.class, + Serializer.Factory.class); + } + + public final Class writeErrorLogAppenderFactoryClass; + public final Class writeErrorLogSerializerFactoryClass; + public final String prefixPath; + public final int maxBufferSize; + public final boolean dropOnOverflow; + } + private static final String HBASE_CLIENT_WRITE_BUFFER_KEY = "hbase.client.write.buffer"; - public final String mismatchDetectorClass; - public final String flowControllerStrategyClass; + + public final Class mismatchDetectorFactoryClass; + public final Class flowControllerStrategyFactoryClass; public final int flowControllerMaxOutstandingRequests; + public final int flowControllerMaxUsedBytes; public final long bufferedMutatorBytesToFlush; - public final String writeErrorConsumerClass; + public final Class writeErrorConsumerFactoryClass; + public final int maxLoggedBinaryValueLength; public final int readSamplingRate; + public final long connectionTerminationTimeoutMillis; - public final String writeErrorLogAppenderClass; - public final String writeErrorLogSerializerClass; - + /** + * Enables concurrent writes mode. Should always be enabled together with {@link + * #waitForSecondaryWrites}. + * + *

<p>In this mode some of the writes ({@link org.apache.hadoop.hbase.client.Put}s, {@link + * org.apache.hadoop.hbase.client.Delete}s and {@link + * org.apache.hadoop.hbase.client.RowMutations}) performed using the {@link + * org.apache.hadoop.hbase.client.Table} API will be performed concurrently and {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.ConcurrentMirroringBufferedMutator} + * instances will be returned by {@link + * org.apache.hadoop.hbase.client.Connection#getBufferedMutator}. Moreover, all operations in this + * mode wait for the secondary database operation to finish before returning to the user (because + * {@link #waitForSecondaryWrites} should be set). + */ public final boolean performWritesConcurrently; + public final boolean waitForSecondaryWrites; + public final Faillog faillog; + + public final int resultScannerBufferedMismatchedResults; + public final TimestampingMode enableDefaultClientSideTimestamps; + public MirroringOptions(Configuration configuration) { - this.mismatchDetectorClass = - configuration.get( - MIRRORING_MISMATCH_DETECTOR_CLASS, DefaultMismatchDetector.class.getCanonicalName()); - this.flowControllerStrategyClass = - configuration.get( - MIRRORING_FLOW_CONTROLLER_STRATEGY_CLASS, - RequestCountingFlowControlStrategy.class.getCanonicalName()); + this.mismatchDetectorFactoryClass = + configuration.getClass( + MIRRORING_MISMATCH_DETECTOR_FACTORY_CLASS, + DefaultMismatchDetector.Factory.class, + MismatchDetector.Factory.class); + this.flowControllerStrategyFactoryClass = + configuration.getClass( + MIRRORING_FLOW_CONTROLLER_STRATEGY_FACTORY_CLASS, + RequestCountingFlowControlStrategy.Factory.class, + FlowControlStrategy.Factory.class); this.flowControllerMaxOutstandingRequests = - Integer.parseInt( - configuration.get(MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS, "500")); + configuration.getInt(MIRRORING_FLOW_CONTROLLER_STRATEGY_MAX_OUTSTANDING_REQUESTS, 500); + this.flowControllerMaxUsedBytes = + 
configuration.getInt(MIRRORING_FLOW_CONTROLLER_STRATEGY_MAX_USED_BYTES, 268435456); this.bufferedMutatorBytesToFlush = - Integer.parseInt( - configuration.get( - MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH, - configuration.get(HBASE_CLIENT_WRITE_BUFFER_KEY, "2097152"))); - this.writeErrorConsumerClass = - configuration.get( - MIRRORING_WRITE_ERROR_CONSUMER_CLASS, - DefaultSecondaryWriteErrorConsumer.class.getCanonicalName()); - this.readSamplingRate = - Integer.parseInt(configuration.get(MIRRORING_READ_VERIFICATION_RATE_PERCENT, "100")); + configuration.getInt( + MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH, + configuration.getInt(HBASE_CLIENT_WRITE_BUFFER_KEY, 2097152)); + this.writeErrorConsumerFactoryClass = + configuration.getClass( + MIRRORING_WRITE_ERROR_CONSUMER_FACTORY_CLASS, + DefaultSecondaryWriteErrorConsumer.Factory.class, + SecondaryWriteErrorConsumer.Factory.class); + this.readSamplingRate = configuration.getInt(MIRRORING_READ_VERIFICATION_RATE_PERCENT, 100); Preconditions.checkArgument(this.readSamplingRate >= 0); Preconditions.checkArgument(this.readSamplingRate <= 100); - this.writeErrorLogAppenderClass = - configuration.get( - MIRRORING_WRITE_ERROR_LOG_APPENDER_CLASS, DefaultAppender.class.getCanonicalName()); - this.writeErrorLogSerializerClass = - configuration.get( - MIRRORING_WRITE_ERROR_LOG_SERIALIZER_CLASS, DefaultSerializer.class.getCanonicalName()); + this.performWritesConcurrently = configuration.getBoolean(MIRRORING_CONCURRENT_WRITES, false); + this.waitForSecondaryWrites = configuration.getBoolean(MIRRORING_SYNCHRONOUS_WRITES, false); + this.connectionTerminationTimeoutMillis = + configuration.getLong(MIRRORING_CONNECTION_CONNECTION_TERMINATION_TIMEOUT, 60000); + + this.resultScannerBufferedMismatchedResults = + configuration.getInt(MIRRORING_SCANNER_BUFFERED_MISMATCHED_READS, 5); + Preconditions.checkArgument(this.resultScannerBufferedMismatchedResults >= 0); + + this.maxLoggedBinaryValueLength = + 
configuration.getInt(MIRRORING_WRITE_ERROR_LOG_MAX_BINARY_VALUE_LENGTH, 32); + Preconditions.checkArgument(this.maxLoggedBinaryValueLength >= 0); + + Preconditions.checkArgument( + !(this.performWritesConcurrently && !this.waitForSecondaryWrites), + "Performing writes concurrently and not waiting for writes is forbidden. " + + "It has no advantage over performing writes asynchronously and not waiting for them."); + this.faillog = new Faillog(configuration); + + this.enableDefaultClientSideTimestamps = + configuration.getEnum( + MIRRORING_ENABLE_DEFAULT_CLIENT_SIDE_TIMESTAMPS, TimestampingMode.inplace); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringResultScanner.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringResultScanner.java index e971e05432..17a7a85e42 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringResultScanner.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringResultScanner.java @@ -15,31 +15,29 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounterUtils.holdReferenceUntilCompletion; + import com.google.api.core.InternalApi; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringTable.RequestScheduler; import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncResultScannerWrapper; import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncResultScannerWrapper.ScannerRequestContext; -import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncTableWrapper; import 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.AccumulatedExceptions; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableCloseable; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.RequestScheduling; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.HierarchicalReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.VerificationContinuationFactory; import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Supplier; -import com.google.common.util.concurrent.FutureCallback; import com.google.common.util.concurrent.ListenableFuture; import com.google.common.util.concurrent.MoreExecutors; +import com.google.common.util.concurrent.SettableFuture; import io.opencensus.common.Scope; import java.io.IOException; -import java.util.concurrent.ExecutionException; import java.util.concurrent.atomic.AtomicBoolean; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.client.AbstractClientScanner; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; @@ -53,45 +51,59 @@ * succeeded, asynchronously on secondary `ResultScanner`. 
*/ @InternalApi("For internal usage only") -public class MirroringResultScanner extends AbstractClientScanner implements ListenableCloseable { - private static final Log log = LogFactory.getLog(MirroringResultScanner.class); +public class MirroringResultScanner extends AbstractClientScanner { private final MirroringTracer mirroringTracer; private final Scan originalScan; private final ResultScanner primaryResultScanner; private final AsyncResultScannerWrapper secondaryResultScannerWrapper; private final VerificationContinuationFactory verificationContinuationFactory; - private final ListenableReferenceCounter listenableReferenceCounter; + /** + * + * + *

+ * <ul>
+ *   <li>HBase 1.x: Counter for MirroringConnection, MirroringTable and MirroringResultScanner.
+ *   <li>HBase 2.x: Counter for MirroringAsyncConnection and MirroringResultScanner.
+ * </ul>
+ */ + private final HierarchicalReferenceCounter referenceCounter; + + private final boolean isVerificationEnabled; + private final RequestScheduler requestScheduler; + private final AtomicBoolean closed = new AtomicBoolean(false); + private final SettableFuture closedFuture = SettableFuture.create(); + /** - * Keeps track of number of entries already read from this scanner to provide context for - * MismatchDetectors. + * We use this object in a synchronized block to ensure that stateful asynchronous verification of + * scan results remains correct. */ - private int readEntries; + private final Object verificationLock = new Object(); - private FlowController flowController; - private boolean isVerificationEnabled; + private final MismatchDetector.ScannerResultVerifier scannerResultVerifier; public MirroringResultScanner( Scan originalScan, ResultScanner primaryResultScanner, - AsyncTableWrapper secondaryTableWrapper, + AsyncResultScannerWrapper secondaryResultScannerWrapper, VerificationContinuationFactory verificationContinuationFactory, - FlowController flowController, MirroringTracer mirroringTracer, - boolean isVerificationEnabled) - throws IOException { + boolean isVerificationEnabled, + RequestScheduler requestScheduler, + ReferenceCounter parentReferenceCounter, + int resultScannerBufferedMismatchedResults) { this.originalScan = originalScan; this.primaryResultScanner = primaryResultScanner; - this.secondaryResultScannerWrapper = secondaryTableWrapper.getScanner(originalScan); + this.secondaryResultScannerWrapper = secondaryResultScannerWrapper; this.verificationContinuationFactory = verificationContinuationFactory; - this.listenableReferenceCounter = new ListenableReferenceCounter(); - this.flowController = flowController; - this.readEntries = 0; - - this.listenableReferenceCounter.holdReferenceUntilClosing(this.secondaryResultScannerWrapper); + this.referenceCounter = new HierarchicalReferenceCounter(parentReferenceCounter); + this.requestScheduler = 
requestScheduler.withReferenceCounter(this.referenceCounter); this.mirroringTracer = mirroringTracer; this.isVerificationEnabled = isVerificationEnabled; + this.scannerResultVerifier = + this.verificationContinuationFactory + .getMismatchDetector() + .createScannerResultVerifier(this.originalScan, resultScannerBufferedMismatchedResults); } @Override @@ -108,19 +120,13 @@ public Result call() throws IOException { }, HBaseOperation.NEXT); - int startingIndex = this.readEntries; - this.readEntries += 1; ScannerRequestContext context = new ScannerRequestContext( - this.originalScan, - result, - startingIndex, - this.mirroringTracer.spanFactory.getCurrentSpan()); + this.originalScan, result, this.mirroringTracer.spanFactory.getCurrentSpan()); this.scheduleRequest( new RequestResourcesDescription(result), - this.secondaryResultScannerWrapper.next(context), - this.verificationContinuationFactory.scannerNext()); + this.secondaryResultScannerWrapper.next(context)); return result; } } @@ -139,100 +145,131 @@ public Result[] call() throws IOException { }, HBaseOperation.NEXT_MULTIPLE); - // TODO: remove this index, it doesn't tell the user anything. - int startingIndex = this.readEntries; - this.readEntries += entriesToRead; ScannerRequestContext context = new ScannerRequestContext( this.originalScan, results, - startingIndex, entriesToRead, this.mirroringTracer.spanFactory.getCurrentSpan()); this.scheduleRequest( new RequestResourcesDescription(results), - this.secondaryResultScannerWrapper.next(context), - this.verificationContinuationFactory.scannerNext()); + this.secondaryResultScannerWrapper.next(context)); return results; } } @Override public void close() { + closePrimaryAndScheduleSecondaryClose(); + } + + @VisibleForTesting + ListenableFuture closePrimaryAndScheduleSecondaryClose() { if (this.closed.getAndSet(true)) { - return; + return this.closedFuture; } + // We are freeing the initial reference to current level reference counter. 
+ this.referenceCounter.current.decrementReferenceCount(); + // But we are scheduling an asynchronous secondary operation and we should increment our parent's + // ref counter until this operation is finished. + holdReferenceUntilCompletion(this.referenceCounter.parent, this.closedFuture); + AccumulatedExceptions exceptionsList = new AccumulatedExceptions(); try { this.primaryResultScanner.close(); } catch (RuntimeException e) { exceptionsList.add(e); - } finally { - try { - this.asyncClose(); - } catch (RuntimeException e) { - log.error("Exception while scheduling this.close().", e); - exceptionsList.add(e); - } + } + + try { + // Close the secondary wrapper (which will close the secondary scanner) after all scheduled requests + // have finished. + this.referenceCounter + .current + .getOnLastReferenceClosed() + .addListener( + new Runnable() { + @Override + public void run() { + try { + scannerResultVerifier.flush(); + secondaryResultScannerWrapper.close(); + closedFuture.set(null); + } catch (RuntimeException e) { + closedFuture.setException(e); + } + } + }, + MoreExecutors.directExecutor()); + } catch (RuntimeException e) { + exceptionsList.add(e); } exceptionsList.rethrowAsRuntimeExceptionIfCaptured(); - } - @VisibleForTesting - ListenableFuture<Void> asyncClose() { - this.secondaryResultScannerWrapper.asyncClose(); - this.listenableReferenceCounter.decrementReferenceCount(); - return this.listenableReferenceCounter.getOnLastReferenceClosed(); + return this.closedFuture; } + /** + * Renews the lease on primary and secondary scanners, synchronously. If any of the {@link + * ResultScanner#renewLease()} calls returns {@code false}, we return {@code false}. If the primary + * {@code renewLease()} succeeds and the secondary fails we still return {@code false} and leave the + * primary with a renewed lease because we have no way of cancelling it - we assume that it will be + * cleaned up after it expires or when the scanner is closed. + * + *

The Bigtable client doesn't support this operation and throws {@link + * UnsupportedOperationException}, but in fact renewing leases is not needed in Bigtable, thus + * whenever we encounter {@link UnsupportedOperationException} we assume that it was the Bigtable + * client throwing it, which means that renewing the lease is not needed, and we treat it as if it + * returned {@code true}. + */ @Override public boolean renewLease() { - boolean primaryLease = this.primaryResultScanner.renewLease(); - if (!primaryLease) { - return false; + try { + boolean primaryLease = this.primaryResultScanner.renewLease(); + if (!primaryLease) { + return false; + } + } catch (UnsupportedOperationException e) { + // We assume that UnsupportedOperationExceptions are thrown by the Bigtable client and + // Bigtable doesn't need to renew scanner's leases. We are behaving as if it returned true. } try { - return this.secondaryResultScannerWrapper.renewLease().get(); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - return false; - } catch (ExecutionException e) { - log.error("Execution exception in secondaryResultScannerWrapper.renewLease().", e.getCause()); - return false; - } catch (Exception e) { - log.error("Exception while scheduling secondaryResultScannerWrapper.renewLease().", e); - return false; + // If the secondary won't renew the lease we will return false even though the primary has + // managed to renew its lease. There is no way of cancelling it so we are just leaving it up + // to HBase to collect it after it expires. + // This operation is not asynchronous because we need to forward its result to the user. + // Unfortunately we have to wait until the mutex guarding the secondaryScanner is released. + return this.secondaryResultScannerWrapper.renewLease(); + } catch (UnsupportedOperationException e) { + // We assume that UnsupportedOperationExceptions are thrown by the Bigtable client and Bigtable + // doesn't need to renew scanner's leases.
We are behaving as if it returned true. + return true; } } @Override public ScanMetrics getScanMetrics() { - throw new UnsupportedOperationException(); + return this.primaryResultScanner.getScanMetrics(); } - private void scheduleRequest( + private void scheduleRequest( RequestResourcesDescription requestResourcesDescription, - Supplier> nextSupplier, - FutureCallback scannerNext) { + Supplier> nextSupplier) { if (!this.isVerificationEnabled) { return; } - this.listenableReferenceCounter.holdReferenceUntilCompletion( - RequestScheduling.scheduleRequestAndVerificationWithFlowControl( - requestResourcesDescription, - nextSupplier, - this.mirroringTracer.spanFactory.wrapReadVerificationCallback(scannerNext), - this.flowController, - this.mirroringTracer)); - } - - @Override - public void addOnCloseListener(Runnable listener) { - this.listenableReferenceCounter - .getOnLastReferenceClosed() - .addListener(listener, MoreExecutors.directExecutor()); + // requestScheduler handles reference counting of async requests. 
+ this.requestScheduler.scheduleRequestWithCallback( + requestResourcesDescription, + nextSupplier, + this.mirroringTracer.spanFactory.wrapReadVerificationCallback( + this.verificationContinuationFactory.scannerNext( + this.verificationLock, + this.secondaryResultScannerWrapper.nextResultQueue, + this.scannerResultVerifier))); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringTable.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringTable.java index c8f43c227c..9661df48c3 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringTable.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/MirroringTable.java @@ -15,45 +15,46 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils.makePutFromResult; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils.emptyResult; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounterUtils.holdReferenceUntilCompletion; import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncTableWrapper; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.AccumulatedExceptions; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.FailedSuccessfulSplit; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.ReadWriteSplit; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOAndInterruptedException; 
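The scanner's close() handover above hinges on the hierarchical reference counting: close() drops the scanner's own initial reference, but holds the parent (connection-level) counter until the asynchronous secondary close completes, and the secondary wrapper is only closed once the last in-flight request releases its reference. A minimal sketch of that pattern, using simplified stand-in types (plain `java.util.concurrent`, not the actual `HierarchicalReferenceCounter`/`ReferenceCounterUtils` classes):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative model: a counter that starts with one "initial" reference and
// completes a future when the last reference is released.
final class RefCounter {
  private final AtomicInteger count = new AtomicInteger(1);
  final CompletableFuture<Void> onLastReferenceClosed = new CompletableFuture<>();

  void increment() {
    count.incrementAndGet();
  }

  void decrement() {
    if (count.decrementAndGet() == 0) {
      onLastReferenceClosed.complete(null);
    }
  }

  // Hold one reference on this counter until the given async operation finishes,
  // mirroring holdReferenceUntilCompletion(parent, closedFuture) above.
  void holdUntilCompletion(CompletableFuture<?> operation) {
    increment();
    operation.whenComplete((result, error) -> decrement());
  }
}
```

With this model, the parent counter cannot reach zero (and so the connection cannot finish closing) while a child's scheduled secondary close is still pending, which is the invariant the patch relies on.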
+import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Batcher; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableCloseable; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.RequestScheduling; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.WriteOperationInfo; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.HierarchicalReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.VerificationContinuationFactory; -import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Function; +import com.google.common.base.Preconditions; import com.google.common.base.Predicate; import com.google.common.base.Supplier; import com.google.common.util.concurrent.FutureCallback; import com.google.common.util.concurrent.ListenableFuture; import 
com.google.common.util.concurrent.MoreExecutors; +import com.google.common.util.concurrent.SettableFuture; import io.opencensus.common.Scope; import java.io.IOException; import java.io.InterruptedIOException; import java.util.ArrayList; -import java.util.Collections; import java.util.List; import java.util.Map; import java.util.concurrent.ExecutorService; import java.util.concurrent.atomic.AtomicBoolean; -import javax.annotation.Nullable; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; @@ -67,7 +68,6 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; -import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; import org.apache.hadoop.hbase.client.Row; import org.apache.hadoop.hbase.client.RowMutations; import org.apache.hadoop.hbase.client.Scan; @@ -90,7 +90,8 @@ * asynchronously. Read operations are mirrored to verify that content of both databases matches. */ @InternalApi("For internal usage only") -public class MirroringTable implements Table, ListenableCloseable { +public class MirroringTable implements Table { + private static final Logger Log = new Logger(MirroringTable.class); private static final Predicate resultIsFaultyPredicate = new Predicate() { @@ -99,20 +100,21 @@ public boolean apply(@NullableDecl Object o) { return o == null || o instanceof Throwable; } }; - - private final Table primaryTable; - private final Table secondaryTable; + protected final Table primaryTable; private final AsyncTableWrapper secondaryAsyncWrapper; private final VerificationContinuationFactory verificationContinuationFactory; - private final FlowController flowController; - private final ListenableReferenceCounter referenceCounter; - private final AtomicBoolean closed = new AtomicBoolean(false); + /** Counter for MirroringConnection and MirroringTable. 
*/ + private final HierarchicalReferenceCounter referenceCounter; private final SecondaryWriteErrorConsumer secondaryWriteErrorConsumer; private final MirroringTracer mirroringTracer; - private final ReadSampler readSampler; - private final boolean performWritesConcurrently; + private final RequestScheduler requestScheduler; + private final Batcher batcher; + private final AtomicBoolean closed = new AtomicBoolean(false); + private final SettableFuture closedFuture = SettableFuture.create(); + private final int resultScannerBufferedMismatchedResults; + private final Timestamper timestamper; /** * @param executorService ExecutorService is used to perform operations on secondaryTable and * verification tasks. @@ -128,38 +130,41 @@ public MirroringTable( FlowController flowController, SecondaryWriteErrorConsumer secondaryWriteErrorConsumer, ReadSampler readSampler, + Timestamper timestamper, boolean performWritesConcurrently, - MirroringTracer mirroringTracer) { + boolean waitForSecondaryWrites, + MirroringTracer mirroringTracer, + ReferenceCounter parentReferenceCounter, + int resultScannerBufferedMismatchedResults) { this.primaryTable = primaryTable; - this.secondaryTable = secondaryTable; this.verificationContinuationFactory = new VerificationContinuationFactory(mismatchDetector); this.readSampler = readSampler; this.secondaryAsyncWrapper = new AsyncTableWrapper( - this.secondaryTable, - MoreExecutors.listeningDecorator(executorService), - mirroringTracer); - this.flowController = flowController; - this.referenceCounter = new ListenableReferenceCounter(); - this.referenceCounter.holdReferenceUntilClosing(this.secondaryAsyncWrapper); + secondaryTable, MoreExecutors.listeningDecorator(executorService), mirroringTracer); + this.referenceCounter = new HierarchicalReferenceCounter(parentReferenceCounter); this.secondaryWriteErrorConsumer = secondaryWriteErrorConsumer; - this.performWritesConcurrently = performWritesConcurrently; + Preconditions.checkArgument( + 
!(performWritesConcurrently && !waitForSecondaryWrites), + "If concurrent writes are enabled, then waiting for secondary writes should also be enabled."); this.mirroringTracer = mirroringTracer; - } - - @Override - public TableName getName() { - return this.primaryTable.getName(); - } - - @Override - public Configuration getConfiguration() { - throw new UnsupportedOperationException(); - } - - @Override - public HTableDescriptor getTableDescriptor() throws IOException { - throw new UnsupportedOperationException(); + this.requestScheduler = + new RequestScheduler(flowController, this.mirroringTracer, this.referenceCounter); + this.timestamper = timestamper; + this.batcher = + new Batcher( + this.primaryTable, + this.secondaryAsyncWrapper, + this.requestScheduler, + this.secondaryWriteErrorConsumer, + this.verificationContinuationFactory, + this.readSampler, + this.timestamper, + resultIsFaultyPredicate, + waitForSecondaryWrites, + performWritesConcurrently, + this.mirroringTracer); + this.resultScannerBufferedMismatchedResults = resultScannerBufferedMismatchedResults; } @Override @@ -172,17 +177,15 @@ public boolean exists(final Get get) throws IOException { new CallableThrowingIOException() { @Override public Boolean call() throws IOException { - return MirroringTable.this.primaryTable.exists(get); + return primaryTable.exists(get); } }, HBaseOperation.EXISTS); - if (this.readSampler.shouldNextReadOperationBeSampled()) { - scheduleSequentialReadOperationWithVerification( - new RequestResourcesDescription(result), - this.secondaryAsyncWrapper.exists(get), - this.verificationContinuationFactory.exists(get, result)); - } + scheduleSequentialReadOperationWithVerification( + new RequestResourcesDescription(result), + this.secondaryAsyncWrapper.exists(get), + this.verificationContinuationFactory.exists(get, result)); return result; } } @@ -198,62 +201,19 @@ public boolean[] existsAll(final List inputList) throws IOException { new CallableThrowingIOException() { @Override 
public boolean[] call() throws IOException { - return MirroringTable.this.primaryTable.existsAll(list); + return primaryTable.existsAll(list); } }, HBaseOperation.EXISTS_ALL); - if (this.readSampler.shouldNextReadOperationBeSampled()) { - scheduleSequentialReadOperationWithVerification( - new RequestResourcesDescription(result), - this.secondaryAsyncWrapper.existsAll(list), - this.verificationContinuationFactory.existsAll(list, result)); - } + scheduleSequentialReadOperationWithVerification( + new RequestResourcesDescription(result), + this.secondaryAsyncWrapper.existsAll(list), + this.verificationContinuationFactory.existsAll(list, result)); return result; } } - @Override - public void batch(List operations, Object[] results) - throws IOException, InterruptedException { - try (Scope scope = this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BATCH)) { - batchWithSpan(operations, results); - } - } - - @Override - public Object[] batch(List operations) throws IOException, InterruptedException { - Log.trace("[%s] batch(operations=%s)", this.getName(), operations); - Object[] results = new Object[operations.size()]; - this.batch(operations, results); - return results; - } - - @Override - public void batchCallback( - List inputOperations, Object[] results, final Callback callback) - throws IOException, InterruptedException { - final List operations = new ArrayList<>(inputOperations); - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BATCH_CALLBACK)) { - Log.trace( - "[%s] batchCallback(operations=%s, results, callback=%s)", - this.getName(), operations, callback); - - batchWithSpan(operations, results, callback); - } - } - - @Override - public Object[] batchCallback(List operations, Callback callback) - throws IOException, InterruptedException { - Log.trace( - "[%s] batchCallback(operations=%s, callback=%s)", this.getName(), operations, callback); - Object[] results = new Object[operations.size()]; - 
this.batchCallback(operations, results, callback); - return results; - } - @Override public Result get(final Get get) throws IOException { try (Scope scope = this.mirroringTracer.spanFactory.operationScope(HBaseOperation.GET)) { @@ -264,17 +224,15 @@ public Result get(final Get get) throws IOException { new CallableThrowingIOException() { @Override public Result call() throws IOException { - return MirroringTable.this.primaryTable.get(get); + return primaryTable.get(get); } }, HBaseOperation.GET); - if (this.readSampler.shouldNextReadOperationBeSampled()) { - scheduleSequentialReadOperationWithVerification( - new RequestResourcesDescription(result), - this.secondaryAsyncWrapper.get(get), - this.verificationContinuationFactory.get(get, result)); - } + scheduleSequentialReadOperationWithVerification( + new RequestResourcesDescription(result), + this.secondaryAsyncWrapper.get(get), + this.verificationContinuationFactory.get(get, result)); return result; } } @@ -290,17 +248,15 @@ public Result[] get(final List inputList) throws IOException { new CallableThrowingIOException() { @Override public Result[] call() throws IOException { - return MirroringTable.this.primaryTable.get(list); + return primaryTable.get(list); } }, HBaseOperation.GET_LIST); - if (this.readSampler.shouldNextReadOperationBeSampled()) { - scheduleSequentialReadOperationWithVerification( - new RequestResourcesDescription(result), - this.secondaryAsyncWrapper.get(list), - this.verificationContinuationFactory.get(list, result)); - } + scheduleSequentialReadOperationWithVerification( + new RequestResourcesDescription(result), + this.secondaryAsyncWrapper.get(list), + this.verificationContinuationFactory.get(list, result)); return result; } } @@ -314,12 +270,13 @@ public ResultScanner getScanner(Scan scan) throws IOException { new MirroringResultScanner( scan, this.primaryTable.getScanner(scan), - this.secondaryAsyncWrapper, + this.secondaryAsyncWrapper.getScanner(scan), 
this.verificationContinuationFactory, - this.flowController, this.mirroringTracer, - this.readSampler.shouldNextReadOperationBeSampled()); - this.referenceCounter.holdReferenceUntilClosing(scanner); + this.readSampler.shouldNextReadOperationBeSampled(), + this.requestScheduler, + this.referenceCounter, + this.resultScannerBufferedMismatchedResults); return scanner; } } @@ -334,60 +291,11 @@ public ResultScanner getScanner(byte[] family, byte[] qualifier) throws IOExcept return getScanner(new Scan().addColumn(family, qualifier)); } - /** - * `close()` won't perform the actual close if there are any in-flight requests, in such a case - * the `close` operation is scheduled and will be performed after all requests have finished. - */ - @Override - public void close() throws IOException { - this.asyncClose(); - } - - @VisibleForTesting - ListenableFuture asyncClose() throws IOException { - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.TABLE_CLOSE)) { - if (this.closed.getAndSet(true)) { - return this.referenceCounter.getOnLastReferenceClosed(); - } - - this.referenceCounter.decrementReferenceCount(); - - AccumulatedExceptions exceptionsList = new AccumulatedExceptions(); - try { - this.mirroringTracer.spanFactory.wrapPrimaryOperation( - new CallableThrowingIOException() { - @Override - public Void call() throws IOException { - MirroringTable.this.primaryTable.close(); - return null; - } - }, - HBaseOperation.TABLE_CLOSE); - } catch (IOException e) { - exceptionsList.add(e); - } - - try { - this.secondaryAsyncWrapper.asyncClose(); - } catch (RuntimeException e) { - exceptionsList.add(e); - } - - exceptionsList.rethrowIfCaptured(); - return this.referenceCounter.getOnLastReferenceClosed(); - } finally { - this.mirroringTracer.spanFactory.asyncCloseSpanWhenCompleted( - this.referenceCounter.getOnLastReferenceClosed()); - } - } - - // TODO: add a config option to fill a timestamp if not present. 
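The removed TODO about filling timestamps is addressed elsewhere in this patch by the new `Timestamper` (see the `this.timestamper.fillTimestamp(rowMutations)` call added to `checkAndMutateWithSpan`). The motivation: a mutation carrying the "latest timestamp" sentinel gets a server-assigned timestamp on each cluster, so mirrored writes would diverge; pinning one client-side timestamp keeps both clusters identical. A simplified sketch of that idea (illustrative only, not the actual `Timestamper` implementation, and using a plain array instead of HBase cells):

```java
// Illustrative model of client-side timestamp filling for mirrored writes.
final class TimestampFiller {
  // HBase uses Long.MAX_VALUE (HConstants.LATEST_TIMESTAMP) as the
  // "assign on the server" sentinel.
  static final long LATEST_TIMESTAMP = Long.MAX_VALUE;

  // Replace every unset timestamp with one client-chosen value so that the
  // primary and secondary clusters store identical cells.
  static long[] fill(long[] cellTimestamps, long now) {
    long[] out = cellTimestamps.clone();
    for (int i = 0; i < out.length; i++) {
      if (out[i] == LATEST_TIMESTAMP) {
        out[i] = now;
      }
    }
    return out;
  }
}
```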
@Override public void put(final Put put) throws IOException { try (Scope scope = this.mirroringTracer.spanFactory.operationScope(HBaseOperation.PUT)) { Log.trace("[%s] put(put=%s)", this.getName(), put); - this.batchSingleWriteOperation(put); + this.batcher.batchSingleWriteOperation(put); } } @@ -397,7 +305,7 @@ public void put(List puts) throws IOException { Log.trace("[%s] put(puts=%s)", this.getName(), puts); try { Object[] results = new Object[puts.size()]; - this.batchWithSpan(puts, results); + this.batcher.batch(puts, results); } catch (InterruptedException e) { IOException e2 = new InterruptedIOException(); e2.initCause(e); @@ -406,35 +314,11 @@ public void put(List puts) throws IOException { } } - @Override - public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, byte[] value, Put put) - throws IOException { - Log.trace( - "[%s] checkAndPut(row=%s, family=%s, qualifier=%s, value=%s, put=%s)", - this.getName(), row, family, qualifier, value, put); - return this.checkAndPut(row, family, qualifier, CompareOp.EQUAL, value, put); - } - - @Override - public boolean checkAndPut( - byte[] row, byte[] family, byte[] qualifier, CompareOp compareOp, byte[] value, Put put) - throws IOException { - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.CHECK_AND_PUT)) { - Log.trace( - "[%s] checkAndPut(row=%s, family=%s, qualifier=%s, compareOp=%s, value=%s, put=%s)", - this.getName(), row, family, qualifier, compareOp, value, put); - RowMutations mutations = new RowMutations(row); - mutations.add(put); - return this.checkAndMutateWithSpan(row, family, qualifier, compareOp, value, mutations); - } - } - @Override public void delete(final Delete delete) throws IOException { try (Scope scope = this.mirroringTracer.spanFactory.operationScope(HBaseOperation.DELETE)) { Log.trace("[%s] delete(delete=%s)", this.getName(), delete); - this.batchSingleWriteOperation(delete); + this.batcher.batchSingleWriteOperation(delete); } } @@ 
-443,53 +327,30 @@ public void delete(List deletes) throws IOException { try (Scope scope = this.mirroringTracer.spanFactory.operationScope(HBaseOperation.DELETE_LIST)) { Log.trace("[%s] delete(deletes=%s)", this.getName(), deletes); - // Delete should remove successfully deleted rows from input list. Object[] results = new Object[deletes.size()]; try { - this.batchWithSpan(deletes, results); + this.batcher.batch(deletes, results); } catch (InterruptedException e) { IOException e2 = new InterruptedIOException(); e2.initCause(e); throw e2; } finally { - final FailedSuccessfulSplit failedSuccessfulSplit = - new FailedSuccessfulSplit<>(deletes, results, resultIsFaultyPredicate); + final FailedSuccessfulSplit failedSuccessfulSplit = + new FailedSuccessfulSplit<>(deletes, results, resultIsFaultyPredicate, Object.class); + // Delete should remove successful operations from input list. + // To conform to this requirement we are clearing the list and re-adding failed deletes. deletes.clear(); deletes.addAll(failedSuccessfulSplit.failedOperations); } } } - @Override - public boolean checkAndDelete( - byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete) throws IOException { - Log.trace( - "[%s] checkAndDelete(row=%s, family=%s, qualifier=%s, value=%s, delete=%s)", - this.getName(), row, family, qualifier, value, delete); - return this.checkAndDelete(row, family, qualifier, CompareOp.EQUAL, value, delete); - } - - @Override - public boolean checkAndDelete( - byte[] row, byte[] family, byte[] qualifier, CompareOp compareOp, byte[] value, Delete delete) - throws IOException { - try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.CHECK_AND_DELETE)) { - Log.trace( - "[%s] checkAndDelete(row=%s, family=%s, qualifier=%s, compareOp=%s, value=%s, delete=%s)", - this.getName(), row, family, qualifier, compareOp, value, delete); - RowMutations mutations = new RowMutations(row); - mutations.add(delete); - return 
this.checkAndMutateWithSpan(row, family, qualifier, compareOp, value, mutations); - } - } - @Override public void mutateRow(final RowMutations rowMutations) throws IOException { try (Scope scope = this.mirroringTracer.spanFactory.operationScope(HBaseOperation.MUTATE_ROW)) { Log.trace("[%s] mutateRow(rowMutations=%s)", this.getName(), rowMutations); - batchSingleWriteOperation(rowMutations); + this.batcher.batchSingleWriteOperation(rowMutations); } } @@ -497,47 +358,51 @@ public void mutateRow(final RowMutations rowMutations) throws IOException { public Result append(final Append append) throws IOException { try (Scope scope = this.mirroringTracer.spanFactory.operationScope(HBaseOperation.APPEND)) { Log.trace("[%s] append(append=%s)", this.getName(), append); + boolean wantsResults = append.isReturnResults(); + append.setReturnResults(true); Result result = this.mirroringTracer.spanFactory.wrapPrimaryOperation( new CallableThrowingIOException() { @Override public Result call() throws IOException { - return MirroringTable.this.primaryTable.append(append); + return primaryTable.append(append); } }, HBaseOperation.APPEND); - Put put = makePutFromResult(result); + Put put = OperationUtils.makePutFromResult(result); scheduleSequentialWriteOperation( new WriteOperationInfo(put), this.secondaryAsyncWrapper.put(put)); - return result; + + // HBase's append() returns null when isReturnResults is false. + return wantsResults ? 
result : null; } } @Override public Result increment(final Increment increment) throws IOException { - // TODO: bug - we should force increment to return results - // increment.setReturnResults(true); try (Scope scope = this.mirroringTracer.spanFactory.operationScope(HBaseOperation.INCREMENT)) { Log.trace("[%s] increment(increment=%s)", this.getName(), increment); + boolean wantsResults = increment.isReturnResults(); + increment.setReturnResults(true); Result result = this.mirroringTracer.spanFactory.wrapPrimaryOperation( new CallableThrowingIOException() { @Override public Result call() throws IOException { - return MirroringTable.this.primaryTable.increment(increment); + return primaryTable.increment(increment); } }, HBaseOperation.INCREMENT); - Put put = makePutFromResult(result); + Put put = OperationUtils.makePutFromResult(result); scheduleSequentialWriteOperation( new WriteOperationInfo(put), this.secondaryAsyncWrapper.put(put)); - return result; + return wantsResults ? result : emptyResult(); } } @@ -549,7 +414,7 @@ public long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, lo this.getName(), row, family, qualifier, amount); Result result = increment((new Increment(row)).addColumn(family, qualifier, amount)); Cell cell = result.getColumnLatestCell(family, qualifier); - assert cell != null; + Preconditions.checkNotNull(cell); return Bytes.toLong(CellUtil.cloneValue(cell)); } @@ -564,55 +429,116 @@ public long incrementColumnValue( increment( (new Increment(row)).addColumn(family, qualifier, amount).setDurability(durability)); Cell cell = result.getColumnLatestCell(family, qualifier); - assert cell != null; + Preconditions.checkNotNull(cell); return Bytes.toLong(CellUtil.cloneValue(cell)); } @Override - public CoprocessorRpcChannel coprocessorService(byte[] bytes) { - throw new UnsupportedOperationException(); + public void batch(List operations, Object[] results) + throws IOException, InterruptedException { + try (Scope scope = 
this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BATCH)) { + this.batcher.batch(operations, results); + } } @Override - public Map coprocessorService( - Class aClass, byte[] bytes, byte[] bytes1, Call call) throws Throwable { - throw new UnsupportedOperationException(); + public Object[] batch(List operations) throws IOException, InterruptedException { + Log.trace("[%s] batch(operations=%s)", this.getName(), operations); + Object[] results = new Object[operations.size()]; + this.batch(operations, results); + return results; } @Override - public void coprocessorService( - Class aClass, byte[] bytes, byte[] bytes1, Call call, Callback callback) - throws Throwable { - throw new UnsupportedOperationException(); + public void batchCallback( + List inputOperations, Object[] results, final Callback callback) + throws IOException, InterruptedException { + final List operations = new ArrayList<>(inputOperations); + try (Scope scope = + this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BATCH_CALLBACK)) { + Log.trace( + "[%s] batchCallback(operations=%s, results, callback=%s)", + this.getName(), operations, callback); + + this.batcher.batch(operations, results, callback); + } } @Override - public long getWriteBufferSize() { - throw new UnsupportedOperationException(); + public Object[] batchCallback(List operations, Callback callback) + throws IOException, InterruptedException { + Log.trace( + "[%s] batchCallback(operations=%s, callback=%s)", this.getName(), operations, callback); + Object[] results = new Object[operations.size()]; + this.batchCallback(operations, results, callback); + return results; } @Override - public void setWriteBufferSize(long l) throws IOException { - throw new UnsupportedOperationException(); + public boolean checkAndMutate( + byte[] row, + byte[] family, + byte[] qualifier, + CompareOp compareOp, + byte[] value, + RowMutations rowMutations) + throws IOException { + try (Scope scope = + 
this.mirroringTracer.spanFactory.operationScope(HBaseOperation.CHECK_AND_MUTATE)) { + Log.trace( + "[%s] checkAndMutate(row=%s, family=%s, qualifier=%s, compareOp=%s, value=%s, rowMutations=%s)", + this.getName(), row, family, qualifier, compareOp, value, rowMutations); + + return checkAndMutateWithSpan(row, family, qualifier, compareOp, value, rowMutations); + } } @Override - public Map batchCoprocessorService( - MethodDescriptor methodDescriptor, Message message, byte[] bytes, byte[] bytes1, R r) - throws Throwable { - throw new UnsupportedOperationException(); + public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, byte[] value, Put put) + throws IOException { + Log.trace( + "[%s] checkAndPut(row=%s, family=%s, qualifier=%s, value=%s, put=%s)", + this.getName(), row, family, qualifier, value, put); + return this.checkAndPut(row, family, qualifier, CompareOp.EQUAL, value, put); } @Override - public void batchCoprocessorService( - MethodDescriptor methodDescriptor, - Message message, - byte[] bytes, - byte[] bytes1, - R r, - Callback callback) - throws Throwable { - throw new UnsupportedOperationException(); + public boolean checkAndPut( + byte[] row, byte[] family, byte[] qualifier, CompareOp compareOp, byte[] value, Put put) + throws IOException { + try (Scope scope = + this.mirroringTracer.spanFactory.operationScope(HBaseOperation.CHECK_AND_PUT)) { + Log.trace( + "[%s] checkAndPut(row=%s, family=%s, qualifier=%s, compareOp=%s, value=%s, put=%s)", + this.getName(), row, family, qualifier, compareOp, value, put); + RowMutations mutations = new RowMutations(row); + mutations.add(put); + return this.checkAndMutateWithSpan(row, family, qualifier, compareOp, value, mutations); + } + } + + @Override + public boolean checkAndDelete( + byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete) throws IOException { + Log.trace( + "[%s] checkAndDelete(row=%s, family=%s, qualifier=%s, value=%s, delete=%s)", + this.getName(), row, family, 
qualifier, value, delete); + return this.checkAndDelete(row, family, qualifier, CompareOp.EQUAL, value, delete); + } + + @Override + public boolean checkAndDelete( + byte[] row, byte[] family, byte[] qualifier, CompareOp compareOp, byte[] value, Delete delete) + throws IOException { + try (Scope scope = + this.mirroringTracer.spanFactory.operationScope(HBaseOperation.CHECK_AND_DELETE)) { + Log.trace( + "[%s] checkAndDelete(row=%s, family=%s, qualifier=%s, compareOp=%s, value=%s, delete=%s)", + this.getName(), row, family, qualifier, compareOp, value, delete); + RowMutations mutations = new RowMutations(row); + mutations.add(delete); + return this.checkAndMutateWithSpan(row, family, qualifier, compareOp, value, mutations); + } } private boolean checkAndMutateWithSpan( @@ -623,12 +549,13 @@ private boolean checkAndMutateWithSpan( final byte[] value, final RowMutations rowMutations) throws IOException { + this.timestamper.fillTimestamp(rowMutations); boolean wereMutationsApplied = this.mirroringTracer.spanFactory.wrapPrimaryOperation( new CallableThrowingIOException() { @Override public Boolean call() throws IOException { - return MirroringTable.this.primaryTable.checkAndMutate( + return primaryTable.checkAndMutate( row, family, qualifier, compareOp, value, rowMutations); } }, @@ -641,412 +568,279 @@ public Boolean call() throws IOException { return wereMutationsApplied; } + /** + * Synchronously {@link Table#close()}s primary table and schedules closing of the secondary table + * after finishing all secondary requests that are yet in-flight ({@link + * AsyncTableWrapper#close()}). 
+ */ @Override - public boolean checkAndMutate( - byte[] row, - byte[] family, - byte[] qualifier, - CompareOp compareOp, - byte[] value, - RowMutations rowMutations) - throws IOException { + public void close() throws IOException { + this.closePrimaryAndScheduleSecondaryClose(); + } + + private void closePrimaryAndScheduleSecondaryClose() throws IOException { try (Scope scope = - this.mirroringTracer.spanFactory.operationScope(HBaseOperation.CHECK_AND_MUTATE)) { - Log.trace( - "[%s] checkAndMutate(row=%s, family=%s, qualifier=%s, compareOp=%s, value=%s, rowMutations=%s)", - this.getName(), row, family, qualifier, compareOp, value, rowMutations); + this.mirroringTracer.spanFactory.operationScope(HBaseOperation.TABLE_CLOSE)) { + if (this.closed.getAndSet(true)) { + return; + } - return checkAndMutateWithSpan(row, family, qualifier, compareOp, value, rowMutations); + // We are freeing the initial reference to the current-level reference counter. + this.referenceCounter.current.decrementReferenceCount(); + // But we are scheduling an asynchronous secondary operation, so we hold our parent's + // ref counter until that operation is finished. + holdReferenceUntilCompletion(this.referenceCounter.parent, this.closedFuture); + + AccumulatedExceptions exceptionsList = new AccumulatedExceptions(); + try { + this.mirroringTracer.spanFactory.wrapPrimaryOperation( + new CallableThrowingIOException() { + @Override + public Void call() throws IOException { + primaryTable.close(); + return null; + } + }, + HBaseOperation.TABLE_CLOSE); + } catch (IOException e) { + exceptionsList.add(e); + } + + try { + // Close secondary wrapper (which will close the secondary table) after all scheduled + // requests have finished.
+ this.referenceCounter + .current + .getOnLastReferenceClosed() + .addListener( + new Runnable() { + @Override + public void run() { + try { + secondaryAsyncWrapper.close(); + closedFuture.set(null); + } catch (IOException e) { + closedFuture.setException(e); + } + } + }, + MoreExecutors.directExecutor()); + } catch (RuntimeException e) { + exceptionsList.add(e); + } + + exceptionsList.rethrowIfCaptured(); + } finally { + this.mirroringTracer.spanFactory.asyncCloseSpanWhenCompleted( + this.referenceCounter.current.getOnLastReferenceClosed()); } } - @Override - public void setOperationTimeout(int i) { - throw new UnsupportedOperationException(); + private void scheduleSequentialReadOperationWithVerification( + final RequestResourcesDescription resourcesDescription, + final Supplier> secondaryOperationSupplier, + final FutureCallback verificationCallback) { + if (!this.readSampler.shouldNextReadOperationBeSampled()) { + return; + } + this.requestScheduler.scheduleRequestWithCallback( + resourcesDescription, + secondaryOperationSupplier, + this.mirroringTracer.spanFactory.wrapReadVerificationCallback(verificationCallback)); } - @Override - public int getOperationTimeout() { - throw new UnsupportedOperationException(); + private void scheduleSequentialWriteOperation( + final WriteOperationInfo writeOperationInfo, + final Supplier> secondaryOperationSupplier) { + WriteOperationFutureCallback writeErrorCallback = + new WriteOperationFutureCallback() { + @Override + public void onFailure(Throwable throwable) { + secondaryWriteErrorConsumer.consume( + writeOperationInfo.hBaseOperation, writeOperationInfo.operations, throwable); + } + }; + + // If the flow controller fails and won't allow the request, we handle the error using this + // handler.
+ Function flowControlReservationErrorConsumer = + new Function() { + @Override + public Void apply(Throwable throwable) { + secondaryWriteErrorConsumer.consume( + writeOperationInfo.hBaseOperation, writeOperationInfo.operations, throwable); + return null; + } + }; + + this.requestScheduler.scheduleRequestWithCallback( + writeOperationInfo.requestResourcesDescription, + secondaryOperationSupplier, + this.mirroringTracer.spanFactory.wrapWriteOperationCallback( + writeOperationInfo.hBaseOperation, this.mirroringTracer, writeErrorCallback), + flowControlReservationErrorConsumer); } @Override - public int getRpcTimeout() { - throw new UnsupportedOperationException(); + public TableName getName() { + return this.primaryTable.getName(); } @Override - public void setRpcTimeout(int i) { - throw new UnsupportedOperationException(); + public Configuration getConfiguration() { + return this.primaryTable.getConfiguration(); } @Override - public int getReadRpcTimeout() { - throw new UnsupportedOperationException(); + public HTableDescriptor getTableDescriptor() throws IOException { + return this.primaryTable.getTableDescriptor(); } @Override - public void setReadRpcTimeout(int i) { + public CoprocessorRpcChannel coprocessorService(byte[] bytes) { throw new UnsupportedOperationException(); } @Override - public int getWriteRpcTimeout() { + public Map coprocessorService( + Class aClass, byte[] bytes, byte[] bytes1, Call call) throws Throwable { throw new UnsupportedOperationException(); } @Override - public void setWriteRpcTimeout(int i) { + public void coprocessorService( + Class aClass, byte[] bytes, byte[] bytes1, Call call, Callback callback) + throws Throwable { throw new UnsupportedOperationException(); } @Override - public void addOnCloseListener(Runnable listener) { - this.referenceCounter - .getOnLastReferenceClosed() - .addListener(listener, MoreExecutors.directExecutor()); + public long getWriteBufferSize() { + throw new UnsupportedOperationException(); } - private void 
scheduleSequentialReadOperationWithVerification( - final RequestResourcesDescription resultInfo, - final Supplier> secondaryGetFutureSupplier, - final FutureCallback verificationCallback) { - this.referenceCounter.holdReferenceUntilCompletion( - RequestScheduling.scheduleRequestAndVerificationWithFlowControl( - resultInfo, - secondaryGetFutureSupplier, - this.mirroringTracer.spanFactory.wrapReadVerificationCallback(verificationCallback), - this.flowController, - this.mirroringTracer)); + @Override + public void setWriteBufferSize(long l) throws IOException { + throw new UnsupportedOperationException(); } - private void scheduleSequentialWriteOperation( - final WriteOperationInfo writeOperationInfo, - final Supplier> secondaryResultFutureSupplier) { - final FlowController flowController = this.flowController; - WriteOperationFutureCallback writeErrorCallback = - new WriteOperationFutureCallback() { - @Override - public void onFailure(Throwable throwable) { - secondaryWriteErrorConsumer.consume( - writeOperationInfo.hBaseOperation, writeOperationInfo.operations, throwable); - } - }; - - this.referenceCounter.holdReferenceUntilCompletion( - RequestScheduling.scheduleRequestAndVerificationWithFlowControl( - writeOperationInfo.requestResourcesDescription, - secondaryResultFutureSupplier, - this.mirroringTracer.spanFactory.wrapWriteOperationCallback(writeErrorCallback), - flowController, - this.mirroringTracer, - new Function() { - @Override - public Void apply(Throwable throwable) { - secondaryWriteErrorConsumer.consume( - writeOperationInfo.hBaseOperation, writeOperationInfo.operations, throwable); - return null; - } - })); + @Override + public Map batchCoprocessorService( + MethodDescriptor methodDescriptor, Message message, byte[] bytes, byte[] bytes1, R r) + throws Throwable { + throw new UnsupportedOperationException(); } - private void batchSingleWriteOperation(Row operation) throws IOException { - Object[] results = new Object[1]; - try { - 
batchWithSpan(Collections.singletonList(operation), results); - } catch (RetriesExhaustedWithDetailsException e) { - Throwable exception = e.getCause(0); - if (exception instanceof IOException) { - throw (IOException) exception; - } - throw new IOException(exception); - } catch (InterruptedException e) { - InterruptedIOException interruptedIOException = new InterruptedIOException(); - interruptedIOException.initCause(e); - throw interruptedIOException; - } + @Override + public void batchCoprocessorService( + MethodDescriptor methodDescriptor, + Message message, + byte[] bytes, + byte[] bytes1, + R r, + Callback callback) + throws Throwable { + throw new UnsupportedOperationException(); } - private void batchWithSpan(final List inputOperations, final Object[] results) - throws IOException, InterruptedException { - batchWithSpan(inputOperations, results, null); + @Override + public void setOperationTimeout(int i) { + throw new UnsupportedOperationException(); } - private void batchWithSpan( - final List inputOperations, - final Object[] results, - @Nullable final Callback callback) - throws IOException, InterruptedException { - final List operations = new ArrayList<>(inputOperations); - Log.trace("[%s] batch(operations=%s, results)", this.getName(), operations); - - // We store batch results in a internal variable to prevent the user from modifying it when it - // might still be used by asynchronous secondary operation. 
- final Object[] internalPrimaryResults = new Object[results.length]; - - CallableThrowingIOAndInterruptedException primaryOperation = - new CallableThrowingIOAndInterruptedException() { - @Override - public Void call() throws IOException, InterruptedException { - if (callback == null) { - MirroringTable.this.primaryTable.batch(operations, internalPrimaryResults); - } else { - MirroringTable.this.primaryTable.batchCallback( - operations, internalPrimaryResults, callback); - } - return null; - } - }; - - try { - if (!this.performWritesConcurrently || !canBatchBePerformedConcurrently(operations)) { - sequentialBatch(internalPrimaryResults, operations, primaryOperation); - } else { - concurrentBatch(internalPrimaryResults, operations, primaryOperation); - } - } finally { - System.arraycopy(internalPrimaryResults, 0, results, 0, results.length); - } + @Override + public int getOperationTimeout() { + throw new UnsupportedOperationException(); } - private boolean canBatchBePerformedConcurrently(List operations) { - // Only Puts and Deletes can be performed concurrently. - // We assume that RowMutations can consist of only Puts and Deletes (which is true in HBase 1.x - // and 2.x). 
- for (Row operation : operations) { - if (!(operation instanceof Put) - && !(operation instanceof Delete) - && !(operation instanceof RowMutations)) { - return false; - } - } - return true; + @Override + public int getRpcTimeout() { + throw new UnsupportedOperationException(); } - private void sequentialBatch( - Object[] results, - List operations, - CallableThrowingIOAndInterruptedException primaryOperation) - throws IOException, InterruptedException { - try { - this.mirroringTracer.spanFactory.wrapPrimaryOperation(primaryOperation, HBaseOperation.BATCH); - } finally { - scheduleSecondaryWriteBatchOperations(operations, results); - } + @Override + public void setRpcTimeout(int i) { + throw new UnsupportedOperationException(); } - private void concurrentBatch( - final Object[] primaryResults, - final List operations, - final CallableThrowingIOAndInterruptedException primaryOperation) - throws IOException, InterruptedException { - RequestResourcesDescription requestResourcesDescription = - new RequestResourcesDescription(operations, new Result[0]); - final Object[] secondaryResults = new Object[operations.size()]; - final Throwable[] primaryException = new Throwable[1]; - final Throwable[] flowControllerException = new Throwable[1]; - - // After the flow control resources have been obtained, we will schedule secondary operation and - // then run primary operation. - final Supplier> invokeBothOperations = - new Supplier>() { - @Override - public ListenableFuture get() { - // We are scheduling secondary batch to run concurrently. - ListenableFuture secondaryOperationEnded = - secondaryAsyncWrapper.batch(operations, secondaryResults).get(); - // Primary operation is then performed synchronously. - try { - primaryOperation.call(); - } catch (IOException | InterruptedException e) { - primaryException[0] = e; - } - // Primary operation has ended and its results are available to the user. - - // We want the schedule verification to after the secondary operation. 
- return secondaryOperationEnded; - } - }; - - FutureCallback verification = - new FutureCallback() { - private void verify() { - int numResults = primaryResults.length; - for (int i = 0; i < numResults; i++) { - Object primaryResult = primaryResults[i]; - Object secondaryResult = secondaryResults[i]; - boolean primaryFailure = resultIsFaultyPredicate.apply(primaryResult); - boolean secondaryFailure = resultIsFaultyPredicate.apply(secondaryResult); - // Primary errors will be reported directly to the user. - if (secondaryFailure && !primaryFailure) { - Throwable exception = secondaryResult == null ? null : (Throwable) secondaryResult; - - secondaryWriteErrorConsumer.consume( - HBaseOperation.BATCH, operations.get(i), exception); - } - } - } - - @Override - public void onSuccess(@NullableDecl Void result) { - verify(); - } - - @Override - public void onFailure(Throwable throwable) { - verify(); - } - }; - - this.referenceCounter.holdReferenceUntilCompletion( - RequestScheduling.scheduleRequestAndVerificationWithFlowControl( - requestResourcesDescription, - invokeBothOperations, - verification, - this.flowController, - this.mirroringTracer, - new Function() { - @NullableDecl - @Override - public Void apply(@NullableDecl Throwable throwable) { - flowControllerException[0] = throwable; - return null; - } - })); - - if (flowControllerException[0] != null) { - throw new IOException("FlowController rejected the request", flowControllerException[0]); - } - - if (primaryException[0] != null) { - if (primaryException[0] instanceof InterruptedException) { - throw (InterruptedException) primaryException[0]; - } else { - throw (IOException) primaryException[0]; - } - } + @Override + public int getReadRpcTimeout() { + throw new UnsupportedOperationException(); } - // TODO: force increments and appends to return resutlts. 
- // TODO: we had a discussion about this code - it is not so readable and not optimal, maybe this - // code can be somehow simplified, for example, maybe we could split the batch in two parts - - // reads and writes. - private void scheduleSecondaryWriteBatchOperations( - final List operations, final Object[] results) { - - final FailedSuccessfulSplit failedSuccessfulSplit = - createOperationsSplit(operations, results); - - if (failedSuccessfulSplit.successfulOperations.size() == 0) { - return; - } - - List operationsToScheduleOnSecondary = - rewriteIncrementsAndAppendsAsPuts( - failedSuccessfulSplit.successfulOperations, failedSuccessfulSplit.successfulResults); - - final Object[] resultsSecondary = new Object[operationsToScheduleOnSecondary.size()]; - - // List of writes created by this call contains Puts instead of Increments and Appends and it - // can be passed to secondaryWriteErrorConsumer. - final ReadWriteSplit successfulReadWriteSplit = - new ReadWriteSplit<>( - failedSuccessfulSplit.successfulOperations, - failedSuccessfulSplit.successfulResults, - Result.class); - - FutureCallback verificationFuture = - BatchHelpers.createBatchVerificationCallback( - failedSuccessfulSplit, - successfulReadWriteSplit, - resultsSecondary, - verificationContinuationFactory.getMismatchDetector(), - this.secondaryWriteErrorConsumer, - resultIsFaultyPredicate, - this.mirroringTracer); - - RequestResourcesDescription requestResourcesDescription = - new RequestResourcesDescription( - operationsToScheduleOnSecondary, successfulReadWriteSplit.readResults); - - Function resourceReservationFailureCallback = - new Function() { - @Override - public Void apply(Throwable throwable) { - secondaryWriteErrorConsumer.consume( - HBaseOperation.BATCH, successfulReadWriteSplit.writeOperations, throwable); - return null; - } - }; - - this.referenceCounter.holdReferenceUntilCompletion( - RequestScheduling.scheduleRequestAndVerificationWithFlowControl( - requestResourcesDescription, - 
this.secondaryAsyncWrapper.batch(operationsToScheduleOnSecondary, resultsSecondary), - verificationFuture, - this.flowController, - this.mirroringTracer, - resourceReservationFailureCallback)); + @Override + public void setReadRpcTimeout(int i) { + throw new UnsupportedOperationException(); } - private FailedSuccessfulSplit createOperationsSplit( - List operations, Object[] results) { - boolean skipReads = !this.readSampler.shouldNextReadOperationBeSampled(); - if (skipReads) { - ReadWriteSplit readWriteSplit = new ReadWriteSplit<>(operations, results, Object.class); - return new FailedSuccessfulSplit<>( - readWriteSplit.writeOperations, readWriteSplit.writeResults, resultIsFaultyPredicate); - } - return new FailedSuccessfulSplit<>(operations, results, resultIsFaultyPredicate); + @Override + public int getWriteRpcTimeout() { + throw new UnsupportedOperationException(); } - private List rewriteIncrementsAndAppendsAsPuts( - List successfulOperations, Result[] successfulResults) { - List rewrittenRows = new ArrayList<>(); - for (int i = 0; i < successfulOperations.size(); i++) { - Row operation = successfulOperations.get(i); - if (operation instanceof Increment || operation instanceof Append) { - Result result = successfulResults[i]; - rewrittenRows.add(makePutFromResult(result)); - } else { - rewrittenRows.add(operation); - } - } - return rewrittenRows; + @Override + public void setWriteRpcTimeout(int i) { + throw new UnsupportedOperationException(); } - public static class WriteOperationInfo { - public final RequestResourcesDescription requestResourcesDescription; - public final List operations; - public final HBaseOperation hBaseOperation; - - public WriteOperationInfo(Put operation) { - this(new RequestResourcesDescription(operation), operation, HBaseOperation.PUT); - } - - public WriteOperationInfo(Delete operation) { - this(new RequestResourcesDescription(operation), operation, HBaseOperation.DELETE); - } - - public WriteOperationInfo(Append operation) { - 
this(new RequestResourcesDescription(operation), operation, HBaseOperation.APPEND); - } - - public WriteOperationInfo(Increment operation) { - this(new RequestResourcesDescription(operation), operation, HBaseOperation.INCREMENT); - } - - public WriteOperationInfo(RowMutations operation) { - this(new RequestResourcesDescription(operation), operation, HBaseOperation.MUTATE_ROW); - } - - private WriteOperationInfo( - RequestResourcesDescription requestResourcesDescription, - Row operation, - HBaseOperation hBaseOperation) { - this.requestResourcesDescription = requestResourcesDescription; - this.operations = Collections.singletonList(operation); - this.hBaseOperation = hBaseOperation; + /** + * Helper class that holds common parameters to {@link + * RequestScheduling#scheduleRequestWithCallback(RequestResourcesDescription, Supplier, + * FutureCallback, FlowController, MirroringTracer, Function)} for a single instance of {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.MirroringTable}. + * + *

It also takes care of reference counting all scheduled operations. + */ + public static class RequestScheduler { + final FlowController flowController; + final MirroringTracer mirroringTracer; + final ReferenceCounter referenceCounter; + + public RequestScheduler( + FlowController flowController, + MirroringTracer mirroringTracer, + ReferenceCounter referenceCounter) { + this.flowController = flowController; + this.mirroringTracer = mirroringTracer; + this.referenceCounter = referenceCounter; + } + + public RequestScheduler withReferenceCounter(ReferenceCounter referenceCounter) { + return new RequestScheduler(this.flowController, this.mirroringTracer, referenceCounter); + } + + public ListenableFuture scheduleRequestWithCallback( + final RequestResourcesDescription requestResourcesDescription, + final Supplier> secondaryResultFutureSupplier, + final FutureCallback verificationCallback) { + return this.scheduleRequestWithCallback( + requestResourcesDescription, + secondaryResultFutureSupplier, + verificationCallback, + // noop flowControlReservationErrorConsumer + new Function() { + @Override + public Void apply(Throwable t) { + return null; + } + }); + } + + public ListenableFuture scheduleRequestWithCallback( + final RequestResourcesDescription requestResourcesDescription, + final Supplier> secondaryResultFutureSupplier, + final FutureCallback verificationCallback, + final Function flowControlReservationErrorConsumer) { + ListenableFuture future = + RequestScheduling.scheduleRequestWithCallback( + requestResourcesDescription, + secondaryResultFutureSupplier, + verificationCallback, + this.flowController, + this.mirroringTracer, + flowControlReservationErrorConsumer); + holdReferenceUntilCompletion(this.referenceCounter, future); + return future; } } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/AsyncResultScannerWrapper.java 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/AsyncResultScannerWrapper.java index e351c3f378..fbc9b2f644 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/AsyncResultScannerWrapper.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/AsyncResultScannerWrapper.java @@ -18,14 +18,11 @@ import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringResultScanner; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableCloseable; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; import com.google.common.base.Supplier; import com.google.common.util.concurrent.ListenableFuture; import com.google.common.util.concurrent.ListeningExecutorService; -import com.google.common.util.concurrent.MoreExecutors; import io.opencensus.common.Scope; import io.opencensus.trace.Span; import java.io.IOException; @@ -35,7 +32,6 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.client.Table; /** * {@link MirroringResultScanner} schedules asynchronous next()s after synchronous operations to @@ -45,8 +41,7 @@ *

Note that next() method returns a Supplier<> as its result is used only in callbacks */ @InternalApi("For internal usage only") -public class AsyncResultScannerWrapper implements ListenableCloseable { - private final Table table; +public class AsyncResultScannerWrapper { private final MirroringTracer mirroringTracer; /** * We use this queue to ensure that asynchronous next()s are called in the same order and with the @@ -54,61 +49,50 @@ public class AsyncResultScannerWrapper implements ListenableCloseable { */ private final ConcurrentLinkedQueue nextContextQueue; + /** + * We use this queue to ensure that asynchronous verification of scan results is called in the + * same order as next()s on primary result scanner. + */ + public final ConcurrentLinkedQueue nextResultQueue; + private final ResultScanner scanner; private final ListeningExecutorService executorService; private final AtomicBoolean closed = new AtomicBoolean(false); - /** - * We are counting references to this object to be able to call {@link ResultScanner#close()} on - * underlying scanner in a predictable way. The reference count is increased before submitting - * each asynchronous task, and decreased after it finishes. Moreover, this object holds an - * implicit self-reference, which in released in {@link #asyncClose()}. - * - *

In this way we are able to call ResultScanner#close() only if all scheduled tasks have - * finished and #asyncClose() was called. - */ - private ListenableReferenceCounter pendingOperationsReferenceCounter; public AsyncResultScannerWrapper( - Table table, ResultScanner scanner, ListeningExecutorService executorService, MirroringTracer mirroringTracer) { - super(); - this.table = table; this.scanner = scanner; this.mirroringTracer = mirroringTracer; - this.pendingOperationsReferenceCounter = new ListenableReferenceCounter(); this.executorService = executorService; this.nextContextQueue = new ConcurrentLinkedQueue<>(); + this.nextResultQueue = new ConcurrentLinkedQueue<>(); } - public Supplier> next( - final ScannerRequestContext context) { - return new Supplier>() { + public Supplier> next(final ScannerRequestContext context) { + return new Supplier>() { @Override - public ListenableFuture get() { - // TODO(mwalkiewicz): this is not locked - check if it is ok? + public ListenableFuture get() { nextContextQueue.add(context); - ListenableFuture future = scheduleNext(); - pendingOperationsReferenceCounter.holdReferenceUntilCompletion(future); - return future; + return scheduleNext(); } }; } - private ListenableFuture scheduleNext() { + private ListenableFuture scheduleNext() { return this.executorService.submit( - new Callable() { + new Callable() { @Override - public AsyncScannerVerificationPayload call() throws AsyncScannerExceptionWithContext { - // TODO: verify if lock on table is required or the lock on the scanner would be enough. 
- synchronized (AsyncResultScannerWrapper.this.table) { + public Void call() throws AsyncScannerExceptionWithContext { + synchronized (AsyncResultScannerWrapper.this) { final ScannerRequestContext requestContext = AsyncResultScannerWrapper.this.nextContextQueue.remove(); try (Scope scope = AsyncResultScannerWrapper.this.mirroringTracer.spanFactory.spanAsScope( requestContext.span)) { - return performNext(requestContext); + AsyncResultScannerWrapper.this.nextResultQueue.add(performNext(requestContext)); + return null; } } } @@ -156,52 +140,24 @@ public Result[] call() throws IOException { return new AsyncScannerVerificationPayload(requestContext, result); } - public ListenableFuture renewLease() { - return submitTask( - new Callable() { - @Override - public Boolean call() { - boolean result; - synchronized (table) { - result = scanner.renewLease(); - } - return result; - } - }); + /** + * This operation is thread-safe, but not asynchronous, because there is no need to invoke it + * asynchronously. 
+ */ + public boolean renewLease() { + synchronized (this) { + return scanner.renewLease(); + } } - public ListenableFuture asyncClose() { + public void close() { if (this.closed.getAndSet(true)) { - return this.pendingOperationsReferenceCounter.getOnLastReferenceClosed(); + return; } - this.pendingOperationsReferenceCounter.decrementReferenceCount(); - this.pendingOperationsReferenceCounter - .getOnLastReferenceClosed() - .addListener( - new Runnable() { - @Override - public void run() { - synchronized (table) { - scanner.close(); - } - } - }, - MoreExecutors.directExecutor()); - return this.pendingOperationsReferenceCounter.getOnLastReferenceClosed(); - } - - public ListenableFuture submitTask(Callable task) { - ListenableFuture future = this.executorService.submit(task); - this.pendingOperationsReferenceCounter.holdReferenceUntilCompletion(future); - return future; - } - - @Override - public void addOnCloseListener(Runnable listener) { - this.pendingOperationsReferenceCounter - .getOnLastReferenceClosed() - .addListener(listener, MoreExecutors.directExecutor()); + synchronized (this) { + scanner.close(); + } } /** @@ -214,8 +170,6 @@ public static class ScannerRequestContext { public final Scan scan; /** Results of corresponding scan operation on primary ResultScanner. */ public final Result[] result; - /** Number of Results that were retrieved from this scanner before current request. */ - public final int startingIndex; /** Number of Results requested in current next call. */ public final int numRequests; /** Tracing Span will be used as a parent span of current request. 
*/ @@ -228,27 +182,20 @@ public static class ScannerRequestContext { public final boolean singleNext; private ScannerRequestContext( - Scan scan, - Result[] result, - int startingIndex, - int numRequests, - boolean singleNext, - Span span) { + Scan scan, Result[] result, int numRequests, boolean singleNext, Span span) { this.scan = scan; this.result = result; - this.startingIndex = startingIndex; this.numRequests = numRequests; this.span = span; this.singleNext = singleNext; } - public ScannerRequestContext( - Scan scan, Result[] result, int startingIndex, int numRequests, Span span) { - this(scan, result, startingIndex, numRequests, false, span); + public ScannerRequestContext(Scan scan, Result[] result, int numRequests, Span span) { + this(scan, result, numRequests, false, span); } - public ScannerRequestContext(Scan scan, Result result, int startingIndex, Span span) { - this(scan, new Result[] {result}, startingIndex, 1, true, span); + public ScannerRequestContext(Scan scan, Result result, Span span) { + this(scan, new Result[] {result}, 1, true, span); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/AsyncTableWrapper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/AsyncTableWrapper.java index 6496ddbf9c..4347d516db 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/AsyncTableWrapper.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/AsyncTableWrapper.java @@ -18,16 +18,12 @@ import com.google.api.core.InternalApi; import 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOAndInterruptedException; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableCloseable; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; import com.google.common.base.Supplier; import com.google.common.util.concurrent.ListenableFuture; import com.google.common.util.concurrent.ListeningExecutorService; -import com.google.common.util.concurrent.MoreExecutors; -import com.google.common.util.concurrent.SettableFuture; import java.io.IOException; import java.util.List; import java.util.concurrent.Callable; @@ -45,7 +41,7 @@ /** * MirroringClient verifies consistency between two databases asynchronously - after the results are - * delivered to the user. HBase Table object does not have an synchronous API, so we simulate it by + * delivered to the user. HBase Table object does not have an asynchronous API, so we simulate it by * wrapping the regular Table into AsyncTableWrapper. * *

Table instances are not thread-safe, every operation is synchronized to prevent concurrent @@ -55,23 +51,11 @@ * used in callbacks. */ @InternalApi("For internal usage only") -public class AsyncTableWrapper implements ListenableCloseable { +public class AsyncTableWrapper { private static final Logger Log = new Logger(AsyncTableWrapper.class); private final Table table; private final ListeningExecutorService executorService; private final MirroringTracer mirroringTracer; - /** - * We are counting references to this object to be able to call {@link Table#close()} on - * underlying table in a predictable way. The reference count is increased before submitting each - * asynchronous task or when creating a ResultScanner, and decreased after it finishes. Moreover, - * this object holds an implicit self-reference, which in released in {@link #asyncClose()}. - * - *

In this way we are able to call Table#close() only if all scheduled tasks have finished, all - * scanners are closed, and #asyncClose() was called. - */ - private final ListenableReferenceCounter pendingOperationsReferenceCounter; - - private final SettableFuture closeResultFuture = SettableFuture.create(); private final AtomicBoolean closed = new AtomicBoolean(false); public AsyncTableWrapper( @@ -79,7 +63,6 @@ public AsyncTableWrapper( this.table = table; this.executorService = executorService; this.mirroringTracer = mirroringTracer; - this.pendingOperationsReferenceCounter = new ListenableReferenceCounter(); } public Supplier> get(final Get gets) { @@ -130,52 +113,33 @@ public boolean[] call() throws IOException { HBaseOperation.EXISTS_ALL); } - public ListenableFuture asyncClose() { + public void close() throws IOException { if (this.closed.getAndSet(true)) { - return this.closeResultFuture; + return; } - this.pendingOperationsReferenceCounter.decrementReferenceCount(); - - this.pendingOperationsReferenceCounter - .getOnLastReferenceClosed() - .addListener( - this.mirroringTracer.spanFactory.wrapWithCurrentSpan( - new Runnable() { - @Override - public void run() { - try { - AsyncTableWrapper.this.mirroringTracer.spanFactory.wrapSecondaryOperation( - new CallableThrowingIOException() { - @Override - public Void call() throws IOException { - synchronized (table) { - Log.trace("performing close()"); - table.close(); - } - AsyncTableWrapper.this.closeResultFuture.set(null); - return null; - } - }, - HBaseOperation.TABLE_CLOSE); - } catch (IOException e) { - AsyncTableWrapper.this.closeResultFuture.setException(e); - } finally { - Log.trace("asyncClose() completed"); - } - } - }), - MoreExecutors.directExecutor()); - return this.closeResultFuture; + try { + this.mirroringTracer.spanFactory.wrapSecondaryOperation( + new CallableThrowingIOException() { + @Override + public Void call() throws IOException { + synchronized (table) { + Log.trace("performing close()"); + 
table.close(); + } + return null; + } + }, + HBaseOperation.TABLE_CLOSE); + } finally { + Log.trace("asyncClose() completed"); + } } public AsyncResultScannerWrapper getScanner(Scan scan) throws IOException { Log.trace("getScanner(Scan)"); - AsyncResultScannerWrapper result = - new AsyncResultScannerWrapper( - this.table, this.table.getScanner(scan), this.executorService, this.mirroringTracer); - this.pendingOperationsReferenceCounter.holdReferenceUntilClosing(result); - return result; + return new AsyncResultScannerWrapper( + this.table.getScanner(scan), this.executorService, this.mirroringTracer); } public Supplier> createSubmitTaskSupplier( @@ -209,9 +173,7 @@ public ListenableFuture get() { } public ListenableFuture submitTask(Callable task) { - ListenableFuture future = this.executorService.submit(task); - this.pendingOperationsReferenceCounter.holdReferenceUntilCompletion(future); - return future; + return this.executorService.submit(task); } public Supplier> put(final Put put) { @@ -292,11 +254,4 @@ public Void call() throws IOException, InterruptedException { }, HBaseOperation.BATCH); } - - @Override - public void addOnCloseListener(Runnable listener) { - this.pendingOperationsReferenceCounter - .getOnLastReferenceClosed() - .addListener(listener, MoreExecutors.directExecutor()); - } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/ConcurrentMirroringBufferedMutator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/ConcurrentMirroringBufferedMutator.java new file mode 100644 index 0000000000..1f690c5727 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/ConcurrentMirroringBufferedMutator.java @@ -0,0 +1,386 @@ 
+/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator; + +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConfiguration; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException.ExceptionDetails; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.common.collect.Iterables; +import com.google.common.collect.MapMaker; +import com.google.common.util.concurrent.FutureCallback; +import com.google.common.util.concurrent.Futures; +import com.google.common.util.concurrent.ListenableFuture; +import com.google.common.util.concurrent.MoreExecutors; +import com.google.common.util.concurrent.SettableFuture; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Deque; +import java.util.List; +import 
java.util.Map; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.LinkedBlockingDeque; +import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.hadoop.hbase.client.BufferedMutatorParams; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import org.apache.hadoop.hbase.client.Row; +import org.checkerframework.checker.nullness.compatqual.NullableDecl; + +/** + * {@link MirroringBufferedMutator} implementation that performs writes to primary and secondary + * database concurrently. + * + *

Similarly to {@link SequentialMirroringBufferedMutator}, after at least {@link
+ * com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper#MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH}
+ * bytes of mutations have been issued, an asynchronous flush is performed. This implementation
+ * performs flushes on both the primary and secondary databases. Errors reported by the underlying
+ * buffered mutators are reported to the user after those flushes complete and are annotated with
+ * {@link MirroringOperationException} to denote which database has failed.
+ */
+public class ConcurrentMirroringBufferedMutator
+    extends MirroringBufferedMutator> {
+  // These maps use WeakReferences as keys; however, we store mutations that might end up in these
+  // maps in a buffer before they are scheduled on the secondary buffered mutator, and we remove
+  // them after the secondary is flushed. Elements are inserted into these maps between those steps.
+  // These maps are safe to be used concurrently.
+  private final Map failedPrimaryOperations =
+      (new MapMaker()).weakKeys().makeMap();
+  private final Map failedSecondaryOperations =
+      (new MapMaker()).weakKeys().makeMap();
+
+  private final Deque flushExceptions = new LinkedBlockingDeque<>();
+
+  private final RetriesExhaustedExceptionBuilder retriesExhaustedExceptionBuilder =
+      new RetriesExhaustedExceptionBuilder();
+
+  public ConcurrentMirroringBufferedMutator(
+      Connection primaryConnection,
+      Connection secondaryConnection,
+      BufferedMutatorParams bufferedMutatorParams,
+      MirroringConfiguration configuration,
+      ExecutorService executorService,
+      ReferenceCounter connectionReferenceCounter,
+      Timestamper timestamper,
+      MirroringTracer mirroringTracer)
+      throws IOException {
+    super(
+        primaryConnection,
+        secondaryConnection,
+        bufferedMutatorParams,
+        configuration,
+        executorService,
+        connectionReferenceCounter,
+        timestamper,
+        mirroringTracer);
+  }
+
+  @Override
+  protected void mutateScoped(final List mutations) throws
IOException { + final MirroringExceptionBuilder mirroringExceptionBuilder = + new MirroringExceptionBuilder<>(); + + RequestResourcesDescription resourcesDescription = new RequestResourcesDescription(mutations); + primaryMutate(mutations, mirroringExceptionBuilder); + secondaryMutate(mutations, mirroringExceptionBuilder); + storeResourcesAndFlushIfNeeded(mutations, resourcesDescription); + mirroringExceptionBuilder.throwCombinedExceptionIfPresent(); + throwExceptionIfAvailable(); + } + + private void primaryMutate( + final List mutations, + MirroringExceptionBuilder mirroringExceptionBuilder) { + try { + this.mirroringTracer.spanFactory.wrapPrimaryOperation( + new CallableThrowingIOException() { + @Override + public Void call() throws IOException { + primaryBufferedMutator.mutate(mutations); + return null; + } + }, + HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST); + } catch (RetriesExhaustedWithDetailsException e) { + // Ignore this error, it was already handled by error handler and we will rethrow it after + // flush. + } catch (IOException e) { + mirroringExceptionBuilder.setPrimaryException(e); + } + } + + private void secondaryMutate( + final List mutations, + MirroringExceptionBuilder mirroringExceptionBuilder) { + try { + this.mirroringTracer.spanFactory.wrapSecondaryOperation( + new CallableThrowingIOException() { + @Override + public Void call() throws IOException { + secondaryBufferedMutator.mutate(mutations); + return null; + } + }, + HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST); + } catch (RetriesExhaustedWithDetailsException e) { + // Ignore this error, it was already handled by error handler and we will rethrow it after + // flush. 
+ } catch (IOException e) { + mirroringExceptionBuilder.setSecondaryException(e); + } + } + + @Override + protected void handlePrimaryException(RetriesExhaustedWithDetailsException e) { + for (int i = 0; i < e.getNumExceptions(); i++) { + failedPrimaryOperations.put( + e.getRow(i), new ExceptionDetails(e.getCause(i), e.getHostnamePort(i))); + } + } + + @Override + protected void handleSecondaryException(RetriesExhaustedWithDetailsException e) { + for (int i = 0; i < e.getNumExceptions(); i++) { + failedSecondaryOperations.put( + e.getRow(i), new ExceptionDetails(e.getCause(i), e.getHostnamePort(i))); + } + } + + @Override + protected FlushFutures scheduleFlushScoped( + final List> dataToFlush, FlushFutures previousFlushFutures) { + final SettableFuture bothFlushesFinished = SettableFuture.create(); + + ListenableFuture primaryFlushFinished = + schedulePrimaryFlush(previousFlushFutures.primaryFlushFinished); + ListenableFuture secondaryFlushFinished = + scheduleSecondaryFlush(previousFlushFutures.secondaryFlushFinished); + + // This object will aggregate `IOExceptions` and `RuntimeExceptions` thrown by flush() + // operations which were not handled by registered handlers (that is + // `RetriesExhaustedWithDetailsException`s). Those exceptions will be tagged with + // MirroringOperationException and rethrown to the user. 
+ final MirroringExceptionBuilder mirroringExceptionBuilder = + new MirroringExceptionBuilder<>(); + + final AtomicBoolean firstFinished = new AtomicBoolean(false); + final Runnable flushFinished = + new Runnable() { + @Override + public void run() { + if (firstFinished.getAndSet(true)) { + bothFlushesFinishedCallback(dataToFlush); + Throwable flushException = mirroringExceptionBuilder.buildCombinedException(); + if (flushException != null) { + flushExceptions.add(flushException); + } + bothFlushesFinished.set(null); + } + } + }; + + Futures.addCallback( + primaryFlushFinished, + this.mirroringTracer.spanFactory.wrapWithCurrentSpan( + new FutureCallback() { + @Override + public void onSuccess(@NullableDecl Void aVoid) { + flushFinished.run(); + } + + @Override + public void onFailure(Throwable throwable) { + // RetriesExhaustedWithDetailsException is ignored, it was reported to the handler + // and stored in failedPrimaryOperations buffer. + if (!(throwable instanceof RetriesExhaustedWithDetailsException)) { + mirroringExceptionBuilder.setPrimaryException(throwable); + } + flushFinished.run(); + } + }), + MoreExecutors.directExecutor()); + + Futures.addCallback( + secondaryFlushFinished, + this.mirroringTracer.spanFactory.wrapWithCurrentSpan( + new FutureCallback() { + @Override + public void onSuccess(@NullableDecl Void aVoid) { + flushFinished.run(); + } + + @Override + public void onFailure(Throwable throwable) { + // RetriesExhaustedWithDetailsException is ignored, it was reported to the handler + // and stored in failedSecondaryOperations buffer. + if (!(throwable instanceof RetriesExhaustedWithDetailsException)) { + mirroringExceptionBuilder.setSecondaryException(throwable); + } + flushFinished.run(); + } + }), + MoreExecutors.directExecutor()); + + return new FlushFutures( + primaryFlushFinished, + secondaryFlushFinished, + bothFlushesFinished, + // Flush operation can be unblocked when both flushes have finished. 
+        bothFlushesFinished);
+  }
+
+  private ListenableFuture scheduleSecondaryFlush(
+      final ListenableFuture previousFlushCompletedFuture) {
+    return this.executorService.submit(
+        this.mirroringTracer.spanFactory.wrapWithCurrentSpan(
+            new Callable() {
+              @Override
+              public Void call() throws Exception {
+                mirroringTracer.spanFactory.wrapSecondaryOperation(
+                    createFlushTask(secondaryBufferedMutator, previousFlushCompletedFuture),
+                    HBaseOperation.BUFFERED_MUTATOR_FLUSH);
+                return null;
+              }
+            }));
+  }
+
+  private void bothFlushesFinishedCallback(List> dataToFlush) {
+    Iterable mutations = Iterables.concat(dataToFlush);
+    for (Mutation mutation : mutations) {
+      ExceptionDetails primaryCause = failedPrimaryOperations.remove(mutation);
+      ExceptionDetails secondaryCause = failedSecondaryOperations.remove(mutation);
+      boolean primaryFailed = primaryCause != null;
+      boolean secondaryFailed = secondaryCause != null;
+
+      if (primaryFailed || secondaryFailed) {
+        Throwable exception;
+        String hostnamePort;
+        if (primaryFailed && secondaryFailed) {
+          exception =
+              MirroringOperationException.markedAsBothException(
+                  primaryCause.exception, secondaryCause, mutation);
+          hostnamePort = primaryCause.hostnameAndPort;
+        } else if (primaryFailed) {
+          exception =
+              MirroringOperationException.markedAsPrimaryException(
+                  primaryCause.exception, mutation);
+          hostnamePort = primaryCause.hostnameAndPort;
+        } else {
+          exception =
+              MirroringOperationException.markedAsSecondaryException(
+                  secondaryCause.exception, mutation);
+          hostnamePort = secondaryCause.hostnameAndPort;
+        }
+        retriesExhaustedExceptionBuilder.addException(mutation, exception, hostnamePort);
+      }
+    }
+  }
+
+  /**
+   * Throws exceptions that were reported by the last flush operation, or a
+   * RetriesExhaustedWithDetailsException for rows that were reported as failed.
+ */ + @Override + protected void throwExceptionIfAvailable() throws IOException { + throwFlushExceptionIfAvailable(); + RetriesExhaustedWithDetailsException e = retriesExhaustedExceptionBuilder.clearAndBuild(); + if (e != null) { + this.userListener.onException(e, this); + } + } + + private void throwFlushExceptionIfAvailable() throws IOException { + Throwable error = this.flushExceptions.pollFirst(); + if (error == null) { + return; + } + if (error instanceof IOException) { + throw (IOException) error; + } else if (error instanceof RuntimeException) { + throw (RuntimeException) error; + } else { + throw new RuntimeException(error); + } + } + + private static class RetriesExhaustedExceptionBuilder { + private List mutations = new ArrayList<>(); + private List exceptions = new ArrayList<>(); + private List hostnamePorts = new ArrayList<>(); + + public synchronized void addException( + Mutation mutation, Throwable exception, String hostnamePort) { + this.mutations.add(mutation); + this.hostnamePorts.add(hostnamePort); + this.exceptions.add(exception); + } + + public synchronized RetriesExhaustedWithDetailsException clearAndBuild() { + if (this.mutations.isEmpty()) { + return null; + } + + List mutations = this.mutations; + List exceptions = this.exceptions; + List hostnamePorts = this.hostnamePorts; + this.mutations = new ArrayList<>(); + this.exceptions = new ArrayList<>(); + this.hostnamePorts = new ArrayList<>(); + return new RetriesExhaustedWithDetailsException(exceptions, mutations, hostnamePorts); + } + } + + private static class MirroringExceptionBuilder { + private E primaryException; + private E secondaryException; + + public void setPrimaryException(E primaryException) { + this.primaryException = primaryException; + } + + public void setSecondaryException(E secondaryException) { + this.secondaryException = secondaryException; + } + + public E buildCombinedException() { + if (this.primaryException != null && this.secondaryException != null) { + return 
MirroringOperationException.markedAsBothException( + this.primaryException, new ExceptionDetails(this.secondaryException), null); + } else if (this.primaryException != null) { + return MirroringOperationException.markedAsPrimaryException(this.primaryException, null); + } else if (this.secondaryException != null) { + return MirroringOperationException.markedAsSecondaryException( + this.secondaryException, null); + } else { + return null; + } + } + + public void throwCombinedExceptionIfPresent() throws E { + E exception = this.buildCombinedException(); + if (exception != null) { + throw exception; + } + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/MirroringBufferedMutator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/MirroringBufferedMutator.java new file mode 100644 index 0000000000..a4b8fa203c --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/MirroringBufferedMutator.java @@ -0,0 +1,620 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounterUtils.holdReferenceUntilCompletion; + +import com.google.api.core.InternalApi; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConfiguration; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.AccumulatedExceptions; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.HierarchicalReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.common.util.concurrent.ListenableFuture; +import com.google.common.util.concurrent.ListeningExecutorService; +import com.google.common.util.concurrent.MoreExecutors; +import com.google.common.util.concurrent.SettableFuture; +import io.opencensus.common.Scope; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.hadoop.conf.Configuration; +import 
org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.BufferedMutator;
+import org.apache.hadoop.hbase.client.BufferedMutatorParams;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
+
+/**
+ * Base class for {@code BufferedMutator}s that mirror writes performed on the first database to
+ * the secondary database.
+ *
+ *

Consult {@link SequentialMirroringBufferedMutator} and {@link + * ConcurrentMirroringBufferedMutator} for available mirroring strategies. + * + *

This base class handles tracing, management of the internal mutation buffer, and starting
+ * asynchronous flushes.
+ *
+ *

Sharing code by inheritance was the cleanest approach we could come up with.
+ */
+@InternalApi("For internal usage only")
+public abstract class MirroringBufferedMutator implements BufferedMutator {
+
+  private final SettableFuture closedFuture = SettableFuture.create();
+  private final Timestamper timestamper;
+
+  public static BufferedMutator create(
+      boolean concurrent,
+      Connection primaryConnection,
+      Connection secondaryConnection,
+      BufferedMutatorParams bufferedMutatorParams,
+      MirroringConfiguration configuration,
+      FlowController flowController,
+      ExecutorService executorService,
+      SecondaryWriteErrorConsumer secondaryWriteErrorConsumer,
+      ReferenceCounter connectionReferenceCounter,
+      Timestamper timestamper,
+      MirroringTracer mirroringTracer)
+      throws IOException {
+    if (concurrent) {
+      return new ConcurrentMirroringBufferedMutator(
+          primaryConnection,
+          secondaryConnection,
+          bufferedMutatorParams,
+          configuration,
+          executorService,
+          connectionReferenceCounter,
+          timestamper,
+          mirroringTracer);
+    } else {
+      return new SequentialMirroringBufferedMutator(
+          primaryConnection,
+          secondaryConnection,
+          bufferedMutatorParams,
+          configuration,
+          flowController,
+          executorService,
+          secondaryWriteErrorConsumer,
+          connectionReferenceCounter,
+          timestamper,
+          mirroringTracer);
+    }
+  }
+
+  protected final BufferedMutator primaryBufferedMutator;
+  protected final BufferedMutator secondaryBufferedMutator;
+  protected final ListeningExecutorService executorService;
+  protected final MirroringTracer mirroringTracer;
+
+  /** Configuration that was used to configure this instance. */
+  protected final MirroringConfiguration configuration;
+  /** Parameters that were used to create this instance. */
+  private final BufferedMutatorParams bufferedMutatorParams;
+  /**
+   * Size that mutations kept in {@link FlushSerializer#mutationEntries} should reach to invoke an
+   * asynchronous flush() on the primary database.
+ */ + protected final long mutationsBufferFlushThresholdBytes; + + private final FlushSerializer flushSerializer; + + /** ExceptionListener supplied by the user. */ + protected final ExceptionListener userListener; + + private final AtomicBoolean closed = new AtomicBoolean(false); + private final HierarchicalReferenceCounter referenceCounter; + + public MirroringBufferedMutator( + Connection primaryConnection, + Connection secondaryConnection, + BufferedMutatorParams bufferedMutatorParams, + MirroringConfiguration configuration, + ExecutorService executorService, + ReferenceCounter connectionReferenceCounter, + Timestamper timestamper, + MirroringTracer mirroringTracer) + throws IOException { + this.userListener = bufferedMutatorParams.getListener(); + + // Our primary exception listeners do not throw exception but might call user-supplied handler + // which might throw. All exceptions thrown by that handler are rethrown to the user in places + // where they expect it. + ExceptionListener primaryErrorsListener = + new ExceptionListener() { + @Override + public void onException( + RetriesExhaustedWithDetailsException e, BufferedMutator bufferedMutator) + throws RetriesExhaustedWithDetailsException { + handlePrimaryException(e); + } + }; + + ExceptionListener secondaryErrorsListener = + new ExceptionListener() { + @Override + public void onException( + RetriesExhaustedWithDetailsException e, BufferedMutator bufferedMutator) { + handleSecondaryException(e); + } + }; + + this.primaryBufferedMutator = + primaryConnection.getBufferedMutator( + createBufferedMutatorParamsWithListener(bufferedMutatorParams, primaryErrorsListener)); + this.secondaryBufferedMutator = + secondaryConnection.getBufferedMutator( + createBufferedMutatorParamsWithListener( + bufferedMutatorParams, secondaryErrorsListener)); + this.mutationsBufferFlushThresholdBytes = + configuration.mirroringOptions.bufferedMutatorBytesToFlush; + this.executorService = 
MoreExecutors.listeningDecorator(executorService); + this.configuration = configuration; + this.bufferedMutatorParams = bufferedMutatorParams; + + this.mirroringTracer = mirroringTracer; + this.flushSerializer = new FlushSerializer(); + this.timestamper = timestamper; + + this.referenceCounter = new HierarchicalReferenceCounter(connectionReferenceCounter); + } + + @Override + public void mutate(Mutation mutation) throws IOException { + try (Scope scope = + this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BUFFERED_MUTATOR_MUTATE)) { + mutation = timestamper.fillTimestamp(mutation); + mutateScoped(Collections.singletonList(mutation)); + } + } + + @Override + public void mutate(final List list) throws IOException { + try (Scope scope = + this.mirroringTracer.spanFactory.operationScope( + HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST)) { + List timestampedList = timestamper.fillTimestamp(list); + mutateScoped(timestampedList); + } + } + + protected abstract void mutateScoped(final List list) throws IOException; + + @Override + public void flush() throws IOException { + try (Scope scope = + this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BUFFERED_MUTATOR_FLUSH)) { + try { + // Wait until flush has finished. + scheduleFlush().flushOperationCanContinueFuture.get(); + } catch (InterruptedException | ExecutionException e) { + setInterruptedFlagIfInterruptedException(e); + throw new IOException(e); + } + // If the #flush() above has thrown an exception, it will be propagated to the user now. + // Otherwise we might still have an exception from an asynchronous #flush() to propagate. + + // If the flush operation started in this method throws, we guarantee that the exception will + // be propagated to the user. The rationale depends on whether we use the synchronous or + // concurrent implementation. 
+      // Synchronous case:
+      //   flushOperationCanContinueFuture is completed after primaryFlushErrorsReported is set,
+      //   which happens after storing errors in the exceptionsToBeReportedToTheUser.
+      // Concurrent case:
+      //   flushOperationCanContinueFuture is completed after bothFlushesFinished is set,
+      //   which happens after storing errors from both primary and secondary flushes in the
+      //   mirroringExceptionBuilder.
+      throwExceptionIfAvailable();
+    }
+  }
+
+  protected abstract void throwExceptionIfAvailable() throws IOException;
+
+  /**
+   * Schedules asynchronous flushes of both buffered mutators (either sequentially or concurrently).
+   *
+   * @param dataToFlush List of entries that were accumulated since the last flush and should be
+   *     flushed now.
+   * @param previousFlushFutures Futures that will complete when the previously scheduled flush
+   *     finishes. Used to serialize asynchronous flushes.
+   * @return a pack of Futures that complete at various stages of the flush operation.
+   */
+  protected abstract FlushFutures scheduleFlushScoped(
+      List dataToFlush, FlushFutures previousFlushFutures);
+
+  abstract void handlePrimaryException(RetriesExhaustedWithDetailsException e)
+      throws RetriesExhaustedWithDetailsException;
+
+  abstract void handleSecondaryException(RetriesExhaustedWithDetailsException e);
+
+  protected final void storeResourcesAndFlushIfNeeded(
+      BufferEntryType entry, RequestResourcesDescription resourcesDescription) {
+    this.flushSerializer.storeResourcesAndFlushIfThresholdIsExceeded(entry, resourcesDescription);
+  }
+
+  protected final FlushFutures scheduleFlush() {
+    return this.flushSerializer.scheduleFlush();
+  }
+
+  private void flushBufferedMutatorBeforeClosing()
+      throws ExecutionException, InterruptedException, TimeoutException {
+    scheduleFlush()
+        .flushOperationCanContinueFuture
+        .get(
+            this.configuration.mirroringOptions.connectionTerminationTimeoutMillis,
+            TimeUnit.MILLISECONDS);
+  }
+
+  @Override
+  public final void close() throws
IOException {
+    try (Scope scope =
+        this.mirroringTracer.spanFactory.operationScope(HBaseOperation.BUFFERED_MUTATOR_CLOSE)) {
+      if (this.closed.getAndSet(true)) {
+        this.mirroringTracer
+            .spanFactory
+            .getCurrentSpan()
+            .addAnnotation("MirroringBufferedMutator closed more than once.");
+        return;
+      }
+
+      final AccumulatedExceptions exceptions = new AccumulatedExceptions();
+
+      try {
+        // Schedule flush of all buffered data and:
+        // sequential) wait for primary flush to finish;
+        // concurrent) wait for both flushes to finish.
+        flushBufferedMutatorBeforeClosing();
+      } catch (InterruptedException | ExecutionException | TimeoutException e) {
+        setInterruptedFlagIfInterruptedException(e);
+        exceptions.add(new IOException(e));
+      }
+
+      // Close the primary buffered mutator; it is flushed in both cases.
+      try {
+        closePrimaryBufferedMutator();
+      } catch (IOException e) {
+        exceptions.add(e);
+      }
+
+      // We are freeing the initial reference to the current-level reference counter.
+      referenceCounter.current.decrementReferenceCount();
+      // But we are scheduling an asynchronous secondary operation, so we hold our parent's
+      // ref counter until that operation is finished.
+      holdReferenceUntilCompletion(this.referenceCounter.parent, this.closedFuture);
+
+      try {
+        // Schedule closing the secondary buffered mutator.
+        // sequential) it will be closed at some point in the future.
+        // concurrent) it will be closed immediately, because all asynchronous operations are
+        // already finished and directExecutor will call it in this thread.
+ referenceCounter + .current + .getOnLastReferenceClosed() + .addListener( + new Runnable() { + @Override + public void run() { + try { + closeSecondaryBufferedMutator(); + closedFuture.set(null); + } catch (IOException e) { + closedFuture.setException(e); + } + } + }, + MoreExecutors.directExecutor()); + } catch (RuntimeException e) { + exceptions.add(e); + } + + exceptions.rethrowIfCaptured(); + // Throw exceptions from async operations, if any. + // synchronous) all exceptions from primary operation were reported, because + // flushBufferedMutatorBeforeClosing waits for errors to be reported after primary operation + // finishes; + // concurrent) all exceptions are reported because flushBufferedMutatorBeforeClosing waits for + // both flushes to finish and report errors. + throwExceptionIfAvailable(); + } + } + + private void closePrimaryBufferedMutator() throws IOException { + this.mirroringTracer.spanFactory.wrapPrimaryOperation( + new CallableThrowingIOException() { + @Override + public Void call() throws IOException { + MirroringBufferedMutator.this.primaryBufferedMutator.close(); + return null; + } + }, + HBaseOperation.BUFFERED_MUTATOR_CLOSE); + } + + private void closeSecondaryBufferedMutator() throws IOException { + mirroringTracer.spanFactory.wrapSecondaryOperation( + new CallableThrowingIOException() { + @Override + public Void call() throws IOException { + MirroringBufferedMutator.this.secondaryBufferedMutator.close(); + return null; + } + }, + HBaseOperation.BUFFERED_MUTATOR_CLOSE); + } + + @Override + public long getWriteBufferSize() { + return this.bufferedMutatorParams.getWriteBufferSize(); + } + + @Override + public TableName getName() { + return this.bufferedMutatorParams.getTableName(); + } + + @Override + public Configuration getConfiguration() { + return this.configuration.baseConfiguration; + } + + protected final ListenableFuture schedulePrimaryFlush( + final ListenableFuture previousFlushCompletedFuture) { + return 
this.executorService.submit( + this.mirroringTracer.spanFactory.wrapWithCurrentSpan( + new Callable() { + @Override + public Void call() throws Exception { + mirroringTracer.spanFactory.wrapPrimaryOperation( + createFlushTask(primaryBufferedMutator, previousFlushCompletedFuture), + HBaseOperation.BUFFERED_MUTATOR_FLUSH); + return null; + } + })); + } + + protected final CallableThrowingIOException createFlushTask( + final BufferedMutator bufferedMutator, + final ListenableFuture previousFlushCompletedFuture) { + return new CallableThrowingIOException() { + @Override + public Void call() throws IOException { + try { + previousFlushCompletedFuture.get(); + } catch (InterruptedException | ExecutionException ignored) { + // We do not care about errors, just if the previous flush is over. + } + bufferedMutator.flush(); + return null; + } + }; + } + + protected final void setInterruptedFlagIfInterruptedException(Exception e) { + if (e instanceof InterruptedException) { + Thread.currentThread().interrupt(); + } + } + + /** + * Create a new instance of {@link BufferedMutatorParams} based on supplied parameters but with + * replaced listener. Objects created by this method can be safely used for creating underlying + * buffered mutator. + */ + private static BufferedMutatorParams createBufferedMutatorParamsWithListener( + BufferedMutatorParams bufferedMutatorParams, ExceptionListener exceptionListener) { + BufferedMutatorParams params = new BufferedMutatorParams(bufferedMutatorParams.getTableName()); + params.writeBufferSize(bufferedMutatorParams.getWriteBufferSize()); + params.pool(bufferedMutatorParams.getPool()); + params.maxKeyValueSize(bufferedMutatorParams.getMaxKeyValueSize()); + params.listener(exceptionListener); + return params; + } + + protected static class FlushFutures { + + /** + * Future completed when the primary operation is finished. Used to sequence asynchronous + * flushes of the primary buffered mutator. 
+ */ + public final ListenableFuture<Void> primaryFlushFinished; + + /** + * Future completed when the secondary operation is finished. Used to sequence asynchronous + * flushes of the secondary buffered mutator. + */ + public final ListenableFuture<Void> secondaryFlushFinished; + + /** + * Future completed when both asynchronous flush operations are finished. Used in the {@link + * ConcurrentMirroringBufferedMutator#close()} method. + */ + public final ListenableFuture<Void> bothFlushesFinished; + + /** + * Future completed when an implementation decides that the {@link BufferedMutator#flush()} + * operation performed by the user can unblock. If the asynchronous flush operation throws an + * exception, the implementation should make sure the exception will be correctly read by the {@link + * #throwExceptionIfAvailable()} method, which is called immediately after the completion of + * this future. + */ + public final ListenableFuture<Void> flushOperationCanContinueFuture; + + public FlushFutures( + ListenableFuture<Void> primaryFlushFinished, + ListenableFuture<Void> secondaryFlushFinished, + ListenableFuture<Void> bothFlushesFinished, + ListenableFuture<Void> flushOperationCanContinueFuture) { + this.primaryFlushFinished = primaryFlushFinished; + this.secondaryFlushFinished = secondaryFlushFinished; + this.bothFlushesFinished = bothFlushesFinished; + this.flushOperationCanContinueFuture = flushOperationCanContinueFuture; + } + } + + /** + * Helper class that manages asynchronous flush operations and ensures their correct ordering. + * + *

<p>This is a non-static inner class with only a single instance per MirroringBufferedMutator + * instance, but it is not inlined to facilitate correct synchronization of scheduling flushes and + * to logically separate flush scheduling from other concerns. + * + *

<p>Thread-safe. + */ + class FlushSerializer { + + /** + * Internal buffer that keeps mutations that have not yet been flushed asynchronously. The type of + * the entry is specified by subclasses and can contain more elements than just mutations, e.g. + * related resource reservations. + * + *

<p>{@link #storeResourcesAndFlushIfThresholdIsExceeded} relies on the fact that access to + * this field is synchronized. + * + *

<p>{@link BufferedMutations} is not thread-safe and usage of this field should be + * synchronized on the current instance of {@link FlushSerializer}. + */ + private final BufferedMutations<BufferEntryType> mutationEntries; + + /** + * We have to ensure that the order of asynchronously called {@link BufferedMutator#flush()} is the + * same as the order in which callbacks for these operations were created. To enforce this property, + * each scheduled flush waits for the previously scheduled flush to finish before performing its + * operation. We store the futures of the last scheduled flush operation in this field. + * + *

<p>Because each scheduled flush has to wait for the previously scheduled flush, we implicitly + * create a chain of flushes to be performed. The length of this chain is limited by the + * FlowController - once there are no more resources to be used asynchronously, scheduling of + * new operations will block. + * + *

<p>Access to the {@code lastFlushFutures} field should be synchronized on the current instance of + * {@link FlushSerializer}. + * + *

<p>We have to ensure the ordering to prevent the following scenario: + * + *

    + *
  1. main thread: user calls mutate([1,2,3]) + *
  2. main thread: scheduleFlush with dataToFlush = [1,2,3] (flush1) because the threshold is + * exceeded. + *
  3. main thread: user calls flush() + *
  4. main thread: scheduleFlush with dataToFlush = [] (flush2). + *
  5. main thread: waits for flush2 to finish. + *
  6. worker thread 1: performs flush1 and blocks, the underlying buffered mutator flushes + * [1,2,3]. + *
  7. worker thread 2: performs flush2 - there is nothing more to flush, the call finishes + * immediately (there is no guarantee that this call would wait for flush1 to finish). + *
  8. main thread: continues running, but flush1 is still in progress. +
+ + *

<p>Ensuring the order of flushes forces flush2 to be run after flush1 is finished. + */ + private FlushFutures lastFlushFutures = createCompletedFlushFutures(); + + public FlushSerializer() { + this.mutationEntries = new BufferedMutations<>(); + } + + private FlushFutures createCompletedFlushFutures() { + SettableFuture<Void> future = SettableFuture.create(); + future.set(null); + return new FlushFutures(future, future, future, future); + } + + public final synchronized FlushFutures scheduleFlush() { + // This method is synchronized to make sure that the order of scheduled flushes matches the + // order of created dataToFlush lists. + List<BufferEntryType> dataToFlush = this.mutationEntries.getAndReset(); + return scheduleFlush(dataToFlush); + } + + public final synchronized void storeResourcesAndFlushIfThresholdIsExceeded( + BufferEntryType entry, RequestResourcesDescription resourcesDescription) { + // This method is synchronized to make sure that the order of scheduled flushes matches the + // order of created dataToFlush lists. + this.mutationEntries.add(entry, resourcesDescription.sizeInBytes); + if (this.mutationEntries.getMutationsBufferSizeBytes() > mutationsBufferFlushThresholdBytes) { + scheduleFlush(this.mutationEntries.getAndReset()); + } + } + + private synchronized FlushFutures scheduleFlush(List<BufferEntryType> dataToFlush) { + try (Scope scope = mirroringTracer.spanFactory.scheduleFlushScope()) { + referenceCounter.incrementReferenceCount(); + + FlushFutures resultFutures = scheduleFlushScoped(dataToFlush, lastFlushFutures); + this.lastFlushFutures = resultFutures; + + resultFutures.secondaryFlushFinished.addListener( + new Runnable() { + @Override + public void run() { + referenceCounter.decrementReferenceCount(); + } + }, + MoreExecutors.directExecutor()); + return resultFutures; + } + } + } + + /** + * A container for mutations that were issued to the primary buffered mutator.
Generic EntryType can + * be used to store additional data with mutations (sequential buffered mutator uses it to keep + * FlowController reservations). + * + *

<p>Keeps track of the total size of buffered mutations and detects if there are enough entries to + * perform a flush. + * + *

<p>Not thread-safe; should be synchronized externally. + */ + private static class BufferedMutations<EntryType> { + private List<EntryType> mutationEntries; + private long mutationsBufferSizeBytes; + + private BufferedMutations() { + this.mutationEntries = new ArrayList<>(); + this.mutationsBufferSizeBytes = 0; + } + + public void add(EntryType entry, long sizeInBytes) { + this.mutationEntries.add(entry); + this.mutationsBufferSizeBytes += sizeInBytes; + } + + public long getMutationsBufferSizeBytes() { + return this.mutationsBufferSizeBytes; + } + + public List<EntryType> getAndReset() { + List<EntryType> returnValue = this.mutationEntries; + this.mutationEntries = new ArrayList<>(); + this.mutationsBufferSizeBytes = 0; + return returnValue; + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/SequentialMirroringBufferedMutator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/SequentialMirroringBufferedMutator.java new file mode 100644 index 0000000000..0d650cc18c --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/SequentialMirroringBufferedMutator.java @@ -0,0 +1,528 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator; + +import com.google.api.core.InternalApi; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConfiguration; +import com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.SequentialMirroringBufferedMutator.Entry; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.AccumulatedExceptions; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.CallableThrowingIOException; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.common.collect.MapMaker; +import com.google.common.util.concurrent.FutureCallback; +import com.google.common.util.concurrent.Futures; +import com.google.common.util.concurrent.ListenableFuture; +import com.google.common.util.concurrent.MoreExecutors; +import com.google.common.util.concurrent.SettableFuture; +import io.opencensus.common.Scope; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; +import java.util.concurrent.ConcurrentLinkedQueue; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; 
+import org.apache.hadoop.hbase.client.BufferedMutator; +import org.apache.hadoop.hbase.client.BufferedMutatorParams; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import org.apache.hadoop.hbase.client.Row; +import org.checkerframework.checker.nullness.compatqual.NullableDecl; + +/** + * {@link MirroringBufferedMutator} implementation that performs mutations on secondary database + * only if we are certain that they were successfully applied on primary database. + * + *

<p>The HBase 1.x API doesn't give its user any indication when asynchronous writes were + * performed; only performing a synchronous {@link BufferedMutator#flush()} ensures that all + * previously buffered mutations are done. To achieve our goal we store a copy of all mutations sent + * to the primary BufferedMutator in an internal buffer. When the size of the buffer reaches a threshold of + * {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper#MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH} + * bytes, we perform a flush in a worker thread. After the flush we pass the collected mutations to + * the secondary BufferedMutator and flush it. Writes that have failed on the primary are not forwarded to + * the secondary; writes that have failed on the secondary are forwarded to the {@link + * SecondaryWriteErrorConsumer#consume(HBaseOperation, Row, Throwable)} handler. + * + *

<p>Moreover, we perform our custom flow control to prevent unbounded growth of memory - calls to + * mutate() might block if the secondary database lags behind. We account for the size of all operations that + * were placed in the primary BufferedMutator but weren't yet executed and confirmed on the secondary + * BufferedMutator (or until we are informed that they have failed on the primary). + * + *

<p>Notes about error handling in SequentialMirroringBufferedMutator: + * + *

<p>The HBase 1.x BufferedMutator's API notifies the user about failed mutations by calling the + * user-supplied exception handler with an appropriate {@link RetriesExhaustedWithDetailsException} + * exception as a parameter. The error handlers are not called asynchronously from BufferedMutator's + * internal thread - instead, BufferedMutator implementations gather encountered failed mutations and the + * reasons of their failures in an internal data structure. The exception handler is called when the + * user code interacts with the BufferedMutator again - when {@link + * BufferedMutator#mutate(Mutation)}, {@link BufferedMutator#flush()} or {@link + * BufferedMutator#close()} is called (the HBase 1.x BufferedMutator implementation calls the + * exception handler before performing the operation requested by the user). + * + *

<p>Because the exception handlers are called synchronously in the user thread, they can throw + * exceptions that will be propagated to the user (which can be seen in {@link + * ExceptionListener#onException(RetriesExhaustedWithDetailsException, BufferedMutator)}'s + * signature). The default exception handler simply throws the supplied + * exception. + * + *

<p>The goal of SequentialMirroringBufferedMutator is to mirror only those mutations that were + * successful. For this reason we inject our own error handler into the primary BufferedMutator ({@link + * #handlePrimaryException(RetriesExhaustedWithDetailsException)}) that gathers failed mutations + * that shouldn't be mirrored to the secondary (into the {@link #failedPrimaryOperations} collection) + * and forwards the exception supplied in the parameter to the user's exception handler (as the user + * expects it). If the user-supplied error handler throws an exception, then {@link + * #primaryBufferedMutator#mutate(Mutation)} etc. will also throw. + * + *

<p>We want to forward every exception thrown by the user-supplied error handler back to the user. + * Those exceptions can be thrown in two places: first - when the user calls {@link + * BufferedMutator#mutate(List)} or {@link BufferedMutator#flush()} on our MirroringBufferedMutator + * and we synchronously call the corresponding operation on {@link #primaryBufferedMutator}; second - + * when we perform an asynchronous {@link BufferedMutator#flush()} on primaryBufferedMutator from a + * worker thread. In the first case the exception is rethrown to the user directly from the called + * method. In the second case we gather the asynchronously caught exceptions in the {@link + * #exceptionsToBeReportedToTheUser} structure and rethrow them on the next user interaction with our + * MirroringBufferedMutator. + * + *

<p>In this way we ensure two things: we are notified about every failed mutation, and the user + * receives every exception that was thrown by {@link #primaryBufferedMutator}. + */ +@InternalApi("For internal usage only") +public class SequentialMirroringBufferedMutator extends MirroringBufferedMutator<Entry> { + /** + * Set of {@link Row}s that were passed to the primary BufferedMutator but failed. We create an entry + * in this collection every time our error handler is called by the primary BufferedMutator. Those + * entries are consulted before we perform mutations on the secondary BufferedMutator: if a {@link + * Row} instance scheduled for insertion is in this collection, then it is omitted and the + * corresponding entry is removed from the set. + * + *

<p>This set uses {@code WeakReferences} as keys and compares their content using {@code + * ==} instead of {@code equals}. This is faster than comparing Rows using {@code equals} and is + * safe because we always check if a specific Row object has failed. + */ + private final ConcurrentRowSetWithWeakKeys failedPrimaryOperations = + new ConcurrentRowSetWithWeakKeys(); + /** Stores exceptions thrown by asynchronous operations that were not yet thrown to the user. */ + private final UserExceptionsBuffer exceptionsToBeReportedToTheUser = new UserExceptionsBuffer(); + + private final SecondaryWriteErrorConsumer secondaryWriteErrorConsumer; + private final FlowController flowController; + + public SequentialMirroringBufferedMutator( + Connection primaryConnection, + Connection secondaryConnection, + BufferedMutatorParams bufferedMutatorParams, + MirroringConfiguration configuration, + FlowController flowController, + ExecutorService executorService, + SecondaryWriteErrorConsumer secondaryWriteErrorConsumer, + ReferenceCounter connectionReferenceCounter, + Timestamper timestamper, + MirroringTracer mirroringTracer) + throws IOException { + super( + primaryConnection, + secondaryConnection, + bufferedMutatorParams, + configuration, + executorService, + connectionReferenceCounter, + timestamper, + mirroringTracer); + this.secondaryWriteErrorConsumer = secondaryWriteErrorConsumer; + this.flowController = flowController; + } + + @Override + protected void mutateScoped(final List<? extends Mutation> list) throws IOException { + AccumulatedExceptions primaryExceptions = new AccumulatedExceptions(); + try { + this.mirroringTracer.spanFactory.wrapPrimaryOperation( + new CallableThrowingIOException<Void>() { + @Override + public Void call() throws IOException { + primaryBufferedMutator.mutate(list); + return null; + } + }, + HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST); + } catch (IOException e) { + primaryExceptions.add(e); + } catch (RuntimeException e) { + primaryExceptions.add(e); + } finally { + try { +
// This call might block - we have confirmed that mutate() calls on BufferedMutator from the + // HBase client library might also block. + // Submitting write errors to secondaryWriteErrorConsumer in case of exceptions is handled + // by this method. + // InterruptedException is thrown when we receive an interrupt while waiting for flow + // controller resources - we rethrow it to the user because the interruption might be + // killing a stuck container. + // ExecutionException is thrown when we have a problem with scheduling the secondary operation - + // let's ignore it, the errors were already written to the faillog. + addSecondaryMutation(list); + } catch (InterruptedException e) { + setInterruptedFlagIfInterruptedException(e); + primaryExceptions.add(new IOException(e)); + } catch (ExecutionException e) { + // ignore + } + primaryExceptions.rethrowIfCaptured(); + } + // If primary #mutate() has thrown, the exception is propagated to the user and we don't reach + // here. + // Otherwise we will try to throw exceptions thrown by the asynchronous #flush() on the primary, if + // there were any. + throwExceptionIfAvailable(); + } + + private void addSecondaryMutation(List<? extends Mutation> mutations) + throws ExecutionException, InterruptedException { + try { + RequestResourcesDescription resourcesDescription = new RequestResourcesDescription(mutations); + ListenableFuture<ResourceReservation> reservationFuture = + flowController.asyncRequestResource(resourcesDescription); + + ResourceReservation reservation; + try (Scope scope = this.mirroringTracer.spanFactory.flowControlScope()) { + reservation = reservationFuture.get(); + } + storeResourcesAndFlushIfNeeded(new Entry(mutations, reservation), resourcesDescription); + } catch (InterruptedException | ExecutionException | RuntimeException e) { + // We won't write those mutations to the secondary database; they should be reported to + // secondaryWriteErrorConsumer.
+ reportWriteErrors(mutations, e); + throw e; + } + } + + @Override + protected void handlePrimaryException(RetriesExhaustedWithDetailsException e) + throws RetriesExhaustedWithDetailsException { + for (int i = 0; i < e.getNumExceptions(); i++) { + this.failedPrimaryOperations.add(e.getRow(i)); + } + this.userListener.onException(e, this); + } + + @Override + protected void handleSecondaryException(RetriesExhaustedWithDetailsException e) { + reportWriteErrors(e); + } + + @Override + protected FlushFutures scheduleFlushScoped( + final List<Entry> dataToFlush, final FlushFutures previousFlushFutures) { + final SettableFuture<Void> secondaryFlushFinished = SettableFuture.create(); + + final ListenableFuture<Void> primaryFlushFinished = + schedulePrimaryFlush(previousFlushFutures.primaryFlushFinished); + + final SettableFuture<Void> primaryFlushErrorsReported = SettableFuture.create(); + + Futures.addCallback( + primaryFlushFinished, + this.mirroringTracer.spanFactory.wrapWithCurrentSpan( + new FutureCallback<Void>() { + @Override + public void onSuccess(@NullableDecl Void aVoid) { + primaryFlushErrorsReported.set(null); + performSecondaryFlush( + dataToFlush, + secondaryFlushFinished, + previousFlushFutures.secondaryFlushFinished); + } + + @Override + public void onFailure(Throwable throwable) { + if (throwable instanceof RetriesExhaustedWithDetailsException) { + // If the user-defined listener has thrown an exception + // (RetriesExhaustedWithDetailsException is the only exception that can be + // thrown), we know that some of the writes failed. Our handler has already + // handled those errors. We should also rethrow this exception when the user + // calls mutate/flush the next time.
+ exceptionsToBeReportedToTheUser.addRetriesExhaustedException( + (RetriesExhaustedWithDetailsException) throwable); + primaryFlushErrorsReported.set(null); + + performSecondaryFlush( + dataToFlush, + secondaryFlushFinished, + previousFlushFutures.secondaryFlushFinished); + } else { + // In other cases, we do not know what caused the error and we have no idea + // what was really written to the primary DB. We will behave as if nothing was + // written and throw the exception to the user. Writing mutations to the faillog + // would cause confusion as the user would think that those writes were successful + // on the primary, but they were not. + exceptionsToBeReportedToTheUser.addThrowable(throwable); + primaryFlushErrorsReported.set(null); + + releaseReservations(dataToFlush); + secondaryFlushFinished.setException(throwable); + } + } + }), + MoreExecutors.directExecutor()); + return new FlushFutures( + primaryFlushFinished, + secondaryFlushFinished, + // Both flushes have finished when the secondary has finished because flushes are called + // sequentially. + secondaryFlushFinished, + // The flush operation can be unblocked after errors (if any) are stored in the buffers. + primaryFlushErrorsReported); + } + + private void performSecondaryFlush( + List<Entry> dataToFlush, + SettableFuture<Void> completionFuture, + ListenableFuture<Void> previousFlushCompletedFuture) { + + List<Mutation> mutations = Entry.mergeMutations(dataToFlush); + final List<Mutation> successfulOperations = removeFailedMutations(mutations); + + try { + try { + previousFlushCompletedFuture.get(); + } catch (ExecutionException ignored) { + // InterruptedExceptions are treated as failed mutations. + // ExecutionExceptions are ignored; we only care whether the previous flush finished, not its + // result. + } + + if (!successfulOperations.isEmpty()) { + this.mirroringTracer.spanFactory.wrapSecondaryOperation( + new CallableThrowingIOException<Void>() { + @Override + public Void call() throws IOException { + secondaryBufferedMutator.mutate(successfulOperations); + return null; + } + }, + HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST); + + this.mirroringTracer.spanFactory.wrapSecondaryOperation( + new CallableThrowingIOException<Void>() { + @Override + public Void call() throws IOException { + secondaryBufferedMutator.flush(); + return null; + } + }, + HBaseOperation.BUFFERED_MUTATOR_FLUSH); + } + completionFuture.set(null); + } catch (Throwable e) { + // Our listener is registered and should catch non-fatal errors. This is either an + // InterruptedIOException or some RuntimeException; in both cases we should consider the operation as + // not completed - the worst that can happen is that we will have some writes in both the + // secondary database and the on-disk log. + reportWriteErrors(mutations, e); + completionFuture.setException(e); + } finally { + releaseReservations(dataToFlush); + } + } + + private static void releaseReservations(List<Entry> entries) { + for (Entry entry : entries) { + entry.reservation.release(); + } + } + + /** + * Iterates over {@code dataToFlush} and checks if any of the mutations in it have failed by + * consulting the {@link #failedPrimaryOperations} collection. Failed mutations are removed from + * {@link #failedPrimaryOperations}. + * + *

<p>This method is called from a worker thread that performs the asynchronous flush() of {@link + * #primaryBufferedMutator}, and {@link #failedPrimaryOperations} might contain mutations that are + * not in the {@code dataToFlush} parameter (for instance, an asynchronous flush was scheduled after one + * call to mutate(), but the user thread performed two mutate() calls before the async task was + * started - the flush() on the primary would flush mutations from both mutate() calls, but + * dataToFlush would only contain mutations from the first one). For this reason we cannot just + * clear the {@link #failedPrimaryOperations} collection; if there are some operations left in it + * after this method ends, they will be removed in one of the subsequent calls. + * + * @return List of successful mutations. + */ + private List<Mutation> removeFailedMutations(List<Mutation> dataToFlush) { + List<Mutation> successfulMutations = new ArrayList<>(); + for (Mutation mutation : dataToFlush) { + if (!this.failedPrimaryOperations.remove(mutation)) { + successfulMutations.add(mutation); + } + } + return successfulMutations; + } + + @Override + protected void throwExceptionIfAvailable() throws IOException { + this.exceptionsToBeReportedToTheUser.throwAccumulatedExceptions(); + } + + private void reportWriteErrors(RetriesExhaustedWithDetailsException e) { + try (Scope scope = this.mirroringTracer.spanFactory.writeErrorScope()) { + for (int i = 0; i < e.getNumExceptions(); i++) { + this.secondaryWriteErrorConsumer.consume( + HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST, e.getRow(i), e.getCause(i)); + } + } + } + + private void reportWriteErrors(List<? extends Mutation> mutations, Throwable cause) { + try (Scope scope = this.mirroringTracer.spanFactory.writeErrorScope()) { + this.secondaryWriteErrorConsumer.consume( + HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST, mutations, cause); + } + } + + /** + * Set of {@link Row} objects that keeps weak references to them for faster comparison. It is + * thread-safe. + * + *

<p>The underlying map uses {@code WeakReferences} as keys and compares their content using + * {@code ==} instead of {@code equals}. This is faster than comparing Rows using {@code equals} + * and is safe, because we always check if a specific {@link Row} object has failed. + */ + static class ConcurrentRowSetWithWeakKeys { + private final Set<Row> set = + Collections.newSetFromMap((new MapMaker()).weakKeys().makeMap()); + + public void add(Row entry) { + set.add(entry); + } + + public boolean remove(Row entry) { + return set.remove(entry); + } + } + + public static class Entry { + public final List<? extends Mutation> mutations; + public final ResourceReservation reservation; + + public Entry(List<? extends Mutation> mutations, ResourceReservation reservation) { + this.mutations = mutations; + this.reservation = reservation; + } + + public static List<Mutation> mergeMutations(List<Entry> entries) { + List<Mutation> mutations = new ArrayList<>(); + for (Entry e : entries) { + mutations.addAll(e.mutations); + } + return mutations; + } + } + + /** + * Stores exceptions thrown by asynchronous primary mutations that should be reported to the user. + * + *

<p>{@link RetriesExhaustedWithDetailsException}s are handled separately because multiple such + * exceptions can be combined into a single exception and thrown to the user only once. + * + *

<p>Thread-safe. + */ + private static class UserExceptionsBuffer { + private final Object retriesExhaustedWithDetailsExceptionListLock = new Object(); + /** Thread-safe. */ + private final ConcurrentLinkedQueue<Throwable> otherExceptionsList = + new ConcurrentLinkedQueue<>(); + /** Locked by {@link #retriesExhaustedWithDetailsExceptionListLock}. */ + private List<RetriesExhaustedWithDetailsException> retriesExhaustedWithDetailsExceptionList = + new ArrayList<>(); + + private static RetriesExhaustedWithDetailsException mergeRetiresExhaustedExceptions( + List<RetriesExhaustedWithDetailsException> exceptions) { + List<Row> rows = new ArrayList<>(); + List<Throwable> causes = new ArrayList<>(); + List<String> hostnames = new ArrayList<>(); + + for (RetriesExhaustedWithDetailsException e : exceptions) { + for (int i = 0; i < e.getNumExceptions(); i++) { + rows.add(e.getRow(i)); + causes.add(e.getCause(i)); + hostnames.add(e.getHostnamePort(i)); + } + } + return new RetriesExhaustedWithDetailsException(causes, rows, hostnames); + } + + public void addRetriesExhaustedException(RetriesExhaustedWithDetailsException e) { + synchronized (this.retriesExhaustedWithDetailsExceptionListLock) { + this.retriesExhaustedWithDetailsExceptionList.add(e); + } + } + + public void addThrowable(Throwable e) { + this.otherExceptionsList.add(e); + } + + /** + * This method first throws the oldest non-{@link RetriesExhaustedWithDetailsException} exception. + * If there is no such exception, then all {@link RetriesExhaustedWithDetailsException}s are + * accumulated and thrown at once. If no exceptions were accumulated then nothing is thrown.
+ */ + public void throwAccumulatedExceptions() throws IOException { + IOException exception = getOldestIOException(); + if (exception != null) { + throw exception; + } + + RetriesExhaustedWithDetailsException e = getMergedRetiresExhaustedExceptions(); + if (e != null) { + throw e; + } + } + + private IOException getOldestIOException() { + Throwable operationException = this.otherExceptionsList.poll(); + if (operationException == null) { + return null; + } + if (operationException instanceof IOException) { + return (IOException) operationException; + } + return new IOException(operationException); + } + + private RetriesExhaustedWithDetailsException getMergedRetiresExhaustedExceptions() { + List<RetriesExhaustedWithDetailsException> exceptions; + synchronized (this.retriesExhaustedWithDetailsExceptionListLock) { + if (this.retriesExhaustedWithDetailsExceptionList.isEmpty()) { + return null; + } + exceptions = this.retriesExhaustedWithDetailsExceptionList; + this.retriesExhaustedWithDetailsExceptionList = new ArrayList<>(); + } + + return mergeRetiresExhaustedExceptions(exceptions); + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/AccumulatedExceptions.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/AccumulatedExceptions.java index c47024d6f8..a8664cb3af 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/AccumulatedExceptions.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/AccumulatedExceptions.java @@ -16,6 +16,7 @@ package com.google.cloud.bigtable.mirroring.hbase1_x.utils; import com.google.api.core.InternalApi; +import com.google.common.base.Preconditions; import
java.io.IOException; /** @@ -53,7 +54,7 @@ public void rethrowIfCaptured() throws IOException { if (this.exception instanceof IOException) { throw (IOException) this.exception; } else { - assert this.exception instanceof RuntimeException; + Preconditions.checkState(this.exception instanceof RuntimeException); throw (RuntimeException) this.exception; } } @@ -62,7 +63,6 @@ public void rethrowAsRuntimeExceptionIfCaptured() { try { this.rethrowIfCaptured(); } catch (IOException e) { - assert false; throw new RuntimeException(e); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/BatchHelpers.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/BatchHelpers.java index 991c7677fa..48367dae1e 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/BatchHelpers.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/BatchHelpers.java @@ -15,24 +15,38 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils.makePutFromResult; + +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException.ExceptionDetails; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import com.google.common.base.Preconditions; import com.google.common.base.Predicate; import 
com.google.common.util.concurrent.FutureCallback; import io.opencensus.common.Scope; +import java.io.IOException; import java.lang.reflect.Array; import java.util.ArrayList; +import java.util.IdentityHashMap; import java.util.List; +import java.util.Map; +import org.apache.hadoop.hbase.client.Append; +import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Increment; +import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; import org.apache.hadoop.hbase.client.Table; import org.checkerframework.checker.nullness.compatqual.NullableDecl; public class BatchHelpers { public static FutureCallback createBatchVerificationCallback( - final FailedSuccessfulSplit failedAndSuccessfulPrimaryOperations, + final FailedSuccessfulSplit failedAndSuccessfulPrimaryOperations, final ReadWriteSplit successfulPrimaryReadsAndWrites, final Object[] secondaryResults, final MismatchDetector mismatchDetector, @@ -46,9 +60,9 @@ public void onSuccess(@NullableDecl Void t) { List secondaryOperations = failedAndSuccessfulPrimaryOperations.successfulOperations; - final FailedSuccessfulSplit secondaryFailedAndSuccessfulOperations = + final FailedSuccessfulSplit secondaryFailedAndSuccessfulOperations = new FailedSuccessfulSplit<>( - secondaryOperations, secondaryResults, resultIsFaultyPredicate); + secondaryOperations, secondaryResults, resultIsFaultyPredicate, Object.class); final ReadWriteSplit successfulSecondaryReadsAndWrites = new ReadWriteSplit<>( @@ -64,6 +78,10 @@ public void onSuccess(@NullableDecl Void t) { successfulSecondaryReadsAndWrites.readResults); } } + + if (successfulPrimaryReadsAndWrites.writeOperations.size() > 0) { + mirroringTracer.metricsRecorder.recordSecondaryWriteErrors(HBaseOperation.BATCH, 0); + } 
} @Override @@ -72,9 +90,9 @@ public void onFailure(Throwable throwable) { List secondaryOperations = failedAndSuccessfulPrimaryOperations.successfulOperations; - final FailedSuccessfulSplit secondaryFailedAndSuccessfulOperations = + final FailedSuccessfulSplit secondaryFailedAndSuccessfulOperations = new FailedSuccessfulSplit<>( - secondaryOperations, secondaryResults, resultIsFaultyPredicate); + secondaryOperations, secondaryResults, resultIsFaultyPredicate, Object.class); final ReadWriteSplit successfulSecondaryReadsAndWrites = new ReadWriteSplit<>( @@ -103,12 +121,14 @@ public void onFailure(Throwable throwable) { // Using those indices we select Get operations that have results from both primary and // secondary database, and pass them to `mismatchDetector.batch()`. // We also gather failed gets to pass them to `batchGetFailure`. - MatchingSuccessfulReadsResults matchingSuccessfulReads = - selectMatchingSuccessfulReads( + SecondaryReadsResults secondaryReadsResults = + selectMatchingSecondaryReads( secondaryOperations, failedAndSuccessfulPrimaryOperations.successfulResults, secondaryResults, resultIsFaultyPredicate); + MatchingSuccessfulReadsResults matchingSuccessfulReads = + secondaryReadsResults.matchingSuccessfulReadsResults; try (Scope scope = mirroringTracer.spanFactory.verificationScope()) { if (!matchingSuccessfulReads.successfulReads.isEmpty()) { @@ -118,8 +138,8 @@ public void onFailure(Throwable throwable) { matchingSuccessfulReads.secondaryResults); } - if (!matchingSuccessfulReads.failedReads.isEmpty()) { - mismatchDetector.batch(matchingSuccessfulReads.failedReads, throwable); + if (!secondaryReadsResults.failedSecondaryReads.isEmpty()) { + mismatchDetector.batch(secondaryReadsResults.failedSecondaryReads, throwable); } } } @@ -147,21 +167,28 @@ private void consumeWriteErrors(List writeOperations, Object[] wr private static class MatchingSuccessfulReadsResults { final Result[] primaryResults; final Result[] secondaryResults; - final List 
failedReads; final List successfulReads; private MatchingSuccessfulReadsResults( - Result[] primaryResults, - Result[] secondaryResults, - List failedReads, - List successfulReads) { + Result[] primaryResults, Result[] secondaryResults, List successfulReads) { this.primaryResults = primaryResults; this.secondaryResults = secondaryResults; - this.failedReads = failedReads; this.successfulReads = successfulReads; } } + private static class SecondaryReadsResults { + public final MatchingSuccessfulReadsResults matchingSuccessfulReadsResults; + public final List failedSecondaryReads; + + private SecondaryReadsResults( + MatchingSuccessfulReadsResults matchingSuccessfulReadsResults, + List failedSecondaryReads) { + this.matchingSuccessfulReadsResults = matchingSuccessfulReadsResults; + this.failedSecondaryReads = failedSecondaryReads; + } + } + /** * Creates a {@link MatchingSuccessfulReadsResults} based on arrays of results from primary and * secondary databases and list of performed operations. All inputs are iterated simultaneously, @@ -169,13 +196,13 @@ private MatchingSuccessfulReadsResults( * available, they are added to lists of matching reads and successful operations. In the other * case the Get operation is placed on failed operations list. 
*/ - private static MatchingSuccessfulReadsResults selectMatchingSuccessfulReads( - List operations, + private static SecondaryReadsResults selectMatchingSecondaryReads( + List secondaryOperations, Object[] primaryResults, Object[] secondaryResults, Predicate resultIsFaultyPredicate) { - assert operations.size() == secondaryResults.length; - assert primaryResults.length == secondaryResults.length; + Preconditions.checkArgument(secondaryOperations.size() == secondaryResults.length); + Preconditions.checkArgument(primaryResults.length == secondaryResults.length); List primaryMatchingReads = new ArrayList<>(); List secondaryMatchingReads = new ArrayList<>(); @@ -184,43 +211,47 @@ private static MatchingSuccessfulReadsResults selectMatchingSuccessfulReads( List successfulReads = new ArrayList<>(); for (int i = 0; i < secondaryResults.length; i++) { - if (!(operations.get(i) instanceof Get)) { + if (!(secondaryOperations.get(i) instanceof Get)) { continue; } // We are sure casts are correct, and non-failed results to Gets are always Results. if (resultIsFaultyPredicate.apply(secondaryResults[i])) { - failedReads.add((Get) operations.get(i)); + failedReads.add((Get) secondaryOperations.get(i)); } else { primaryMatchingReads.add((Result) primaryResults[i]); secondaryMatchingReads.add((Result) secondaryResults[i]); - successfulReads.add((Get) operations.get(i)); + successfulReads.add((Get) secondaryOperations.get(i)); } } - return new MatchingSuccessfulReadsResults( - primaryMatchingReads.toArray(new Result[0]), - secondaryMatchingReads.toArray(new Result[0]), - failedReads, - successfulReads); + return new SecondaryReadsResults( + new MatchingSuccessfulReadsResults( + primaryMatchingReads.toArray(new Result[0]), + secondaryMatchingReads.toArray(new Result[0]), + successfulReads), + failedReads); } /** * Helper class facilitating analysis of {@link Table#batch(List, Object[])} results. 
Splits * operations and corresponding results into failed and successful based on contents of results. */ - public static class FailedSuccessfulSplit { - public final List successfulOperations = new ArrayList<>(); - public final Result[] successfulResults; - public final List failedOperations = new ArrayList<>(); + public static class FailedSuccessfulSplit { + public final List successfulOperations = new ArrayList<>(); + public final SuccessfulResultType[] successfulResults; + public final List failedOperations = new ArrayList<>(); public final Object[] failedResults; public FailedSuccessfulSplit( - List operations, Object[] results, Predicate resultIsFaultyPredicate) { - List successfulResultsList = new ArrayList<>(); + List operations, + Object[] results, + Predicate resultIsFaultyPredicate, + Class successfulResultTypeClass) { + List successfulResultsList = new ArrayList<>(); List failedResultsList = new ArrayList<>(); for (int i = 0; i < operations.size(); i++) { - T operation = operations.get(i); + OperationType operation = operations.get(i); Object result = results[i]; boolean isFailed = resultIsFaultyPredicate.apply(result); if (isFailed) { @@ -228,10 +259,12 @@ public FailedSuccessfulSplit( failedResultsList.add(result); } else { successfulOperations.add(operation); - successfulResultsList.add((Result) result); + successfulResultsList.add((SuccessfulResultType) result); } } - this.successfulResults = successfulResultsList.toArray(new Result[0]); + this.successfulResults = + successfulResultsList.toArray( + (SuccessfulResultType[]) Array.newInstance(successfulResultTypeClass, 0)); this.failedResults = failedResultsList.toArray(new Object[0]); } } @@ -271,4 +304,282 @@ public ReadWriteSplit( this.writeResults = writeResultsList.toArray(new Object[0]); } } + + /** + * Analyses results of two batch operations run concurrently and gathers results into {@code + * outputResult} array. + * + *

If there were any failed operations in either of the batches, a {@link + * RetriesExhaustedWithDetailsException} is thrown. Exceptions stored inside the thrown exception + * and in {@code outputResults} are marked with {@link MirroringOperationException} denoting + * whether the operation failed on the primary, on the secondary, or on both databases. + */ + public static void reconcileBatchResultsConcurrent( + Object[] outputResults, + BatchData primaryBatchData, + BatchData secondaryBatchData, + Predicate resultIsFaultyPredicate) + throws RetriesExhaustedWithDetailsException { + List failedRows = new ArrayList<>(); + List failureCauses = new ArrayList<>(); + List hostnameAndPorts = new ArrayList<>(); + + Map failedPrimaryOperations = makeMapOfFailedRows(primaryBatchData); + Map failedSecondaryOperations = makeMapOfFailedRows(secondaryBatchData); + + if (failedPrimaryOperations.isEmpty() && failedSecondaryOperations.isEmpty()) { + // No errors, return early to skip unnecessary computation. + // This is the common case.
+ return; + } + + Preconditions.checkArgument( + primaryBatchData.operations.size() == secondaryBatchData.operations.size()); + for (int index = 0; index < primaryBatchData.operations.size(); index++) { + Object primaryResult = primaryBatchData.results[index]; + Object secondaryResult = secondaryBatchData.results[index]; + boolean primaryOperationFailed = resultIsFaultyPredicate.apply(primaryResult); + boolean secondaryOperationFailed = resultIsFaultyPredicate.apply(secondaryResult); + if (!primaryOperationFailed && !secondaryOperationFailed) { + continue; + } + Row primaryOperation = primaryBatchData.operations.get(index); + Row secondaryOperation = secondaryBatchData.operations.get(index); + ExceptionDetails primaryExceptionDetails = + getExceptionDetails(failedPrimaryOperations, primaryOperation); + ExceptionDetails secondaryExceptionDetails = + getExceptionDetails(failedSecondaryOperations, secondaryOperation); + + Throwable exception; + String hostnameAndPort; + if (primaryOperationFailed && secondaryOperationFailed) { + exception = + MirroringOperationException.markedAsBothException( + primaryExceptionDetails.exception, secondaryExceptionDetails, secondaryOperation); + hostnameAndPort = primaryExceptionDetails.hostnameAndPort; + } else if (primaryOperationFailed) { + exception = + MirroringOperationException.markedAsPrimaryException( + primaryExceptionDetails.exception, primaryOperation); + hostnameAndPort = primaryExceptionDetails.hostnameAndPort; + } else { // secondaryOperationFailed + exception = + MirroringOperationException.markedAsSecondaryException( + secondaryExceptionDetails.exception, secondaryOperation); + hostnameAndPort = secondaryExceptionDetails.hostnameAndPort; + } + outputResults[index] = exception; + failureCauses.add(exception); + failedRows.add(primaryOperation); + hostnameAndPorts.add(hostnameAndPort); + } + if (!failedRows.isEmpty()) { + throw new RetriesExhaustedWithDetailsException(failureCauses, failedRows, hostnameAndPorts); + } + } 
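The concurrent reconciliation above walks the two per-index result arrays in lockstep and overwrites each failed slot with an exception marked by the side that failed. A minimal sketch of that shape in plain Java (no HBase types; `MarkedFailure` and the `FailedOn` enum are hypothetical stand-ins for the `MirroringOperationException` markers):

```java
import java.util.ArrayList;
import java.util.List;

public class ReconcileSketch {
  enum FailedOn { PRIMARY, SECONDARY, BOTH }

  // Hypothetical stand-in for a Throwable marked with MirroringOperationException.
  static final class MarkedFailure extends Exception {
    final FailedOn side;
    MarkedFailure(FailedOn side) {
      super("failed on " + side);
      this.side = side;
    }
  }

  // A result slot is "faulty" when the client left null or stored a Throwable in it.
  static boolean isFaulty(Object result) {
    return result == null || result instanceof Throwable;
  }

  // Walks equal-length primary/secondary result arrays in lockstep; for every index
  // where at least one side failed, records a failure marked with the failing side
  // and overwrites the corresponding output slot, mirroring the shape of
  // reconcileBatchResultsConcurrent.
  static List<MarkedFailure> reconcile(Object[] primary, Object[] secondary, Object[] output) {
    if (primary.length != secondary.length) {
      throw new IllegalArgumentException("batches must have the same size");
    }
    List<MarkedFailure> failures = new ArrayList<>();
    for (int i = 0; i < primary.length; i++) {
      boolean p = isFaulty(primary[i]);
      boolean s = isFaulty(secondary[i]);
      if (!p && !s) {
        continue; // success on both sides, keep the primary result as-is
      }
      FailedOn side = p && s ? FailedOn.BOTH : (p ? FailedOn.PRIMARY : FailedOn.SECONDARY);
      MarkedFailure failure = new MarkedFailure(side);
      output[i] = failure;
      failures.add(failure);
    }
    return failures;
  }
}
```

In the real client the accumulated failures are then thrown as a `RetriesExhaustedWithDetailsException`; the sketch just returns them.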
+ + /** + * Analyses results of two batch operations run sequentially (operations that failed on primary + * were not mirrored to secondary) and gathers results and errors into the {@code outputResults} + * array. + * + *

If there were any failed operations in either of the batches, a {@link + * RetriesExhaustedWithDetailsException} is thrown. Exceptions stored inside the thrown exception + * and in {@code outputResults} are marked with {@link MirroringOperationException} denoting + * whether the operation failed on the primary or on the secondary database. + */ + public static void reconcileBatchResultsSequential( + Object[] outputResults, + BatchData primaryBatchData, + BatchData secondaryBatchData, + Predicate resultIsFaultyPredicate) + throws RetriesExhaustedWithDetailsException { + List failedRows = new ArrayList<>(); + List failureCauses = new ArrayList<>(); + List hostnameAndPorts = new ArrayList<>(); + + Map failedPrimaryOperations = makeMapOfFailedRows(primaryBatchData); + Map failedSecondaryOperations = makeMapOfFailedRows(secondaryBatchData); + + if (failedPrimaryOperations.isEmpty() && failedSecondaryOperations.isEmpty()) { + // No errors, return early to skip unnecessary computation. + // This is the common case. + return; + } + + Preconditions.checkArgument( + primaryBatchData.operations.size() >= secondaryBatchData.operations.size()); + + // If the sizes are not equal, one or more of the following is possible: + // - primary has reads that were excluded from secondary, + // - there were operations that failed on primary and were excluded from secondary. + // We match each result from primary with the corresponding result from secondary. + int primaryIndex = 0; + int secondaryIndex = 0; + + while (primaryIndex < primaryBatchData.operations.size()) { + boolean primaryOperationFailed = + resultIsFaultyPredicate.apply(primaryBatchData.results[primaryIndex]); + + // Failed operations are always excluded from secondary.
+ if (primaryOperationFailed) { + Row operation = primaryBatchData.operations.get(primaryIndex); + failedRows.add(operation); + + ExceptionDetails exceptionDetails = getExceptionDetails(failedPrimaryOperations, operation); + + Throwable exception = + MirroringOperationException.markedAsPrimaryException( + exceptionDetails.exception, operation); + failureCauses.add(exception); + outputResults[primaryIndex] = exception; + hostnameAndPorts.add(exceptionDetails.hostnameAndPort); + primaryIndex++; + continue; + } + // The primary operation was successful; it might have been excluded from secondary if it was a + // read. We assume that either all successful reads are excluded or none of them. + boolean primaryIsRead = primaryBatchData.operations.get(primaryIndex) instanceof Get; + boolean secondaryIsRead = secondaryBatchData.operations.get(secondaryIndex) instanceof Get; + if (primaryIsRead && !secondaryIsRead) { + // The read was excluded. + primaryIndex++; + continue; + } + + // Otherwise a successful write was excluded, which is not possible.
+ Preconditions.checkState(primaryIsRead == secondaryIsRead); + + boolean secondaryOperationFailed = + resultIsFaultyPredicate.apply(secondaryBatchData.results[secondaryIndex]); + if (secondaryOperationFailed) { + Row primaryOperation = primaryBatchData.operations.get(primaryIndex); + Row secondaryOperation = secondaryBatchData.operations.get(secondaryIndex); + failedRows.add(primaryOperation); + ExceptionDetails exceptionDetails = + getExceptionDetails(failedSecondaryOperations, secondaryOperation); + Throwable exception = + MirroringOperationException.markedAsSecondaryException( + exceptionDetails.exception, secondaryOperation); + failureCauses.add(exception); + outputResults[primaryIndex] = exception; + hostnameAndPorts.add(exceptionDetails.hostnameAndPort); + } + primaryIndex++; + secondaryIndex++; + } + if (!failedRows.isEmpty()) { + throw new RetriesExhaustedWithDetailsException(failureCauses, failedRows, hostnameAndPorts); + } + } + + private static ExceptionDetails getExceptionDetails(Map map, Row key) { + ExceptionDetails value = map.get(key); + if (value == null) { + return new ExceptionDetails(new IOException("no details")); + } + return value; + } + + private static Map makeMapOfFailedRows(BatchData primaryBatchData) { + IdentityHashMap result = new IdentityHashMap<>(); + + if (primaryBatchData.exception == null) { + return result; + } + + if (primaryBatchData.exception instanceof RetriesExhaustedWithDetailsException) { + RetriesExhaustedWithDetailsException exception = + (RetriesExhaustedWithDetailsException) primaryBatchData.exception; + for (int i = 0; i < exception.getNumExceptions(); i++) { + result.put( + exception.getRow(i), + new ExceptionDetails(exception.getCause(i), exception.getHostnamePort(i))); + } + } else { + for (Row r : primaryBatchData.operations) { + result.put(r, new ExceptionDetails(primaryBatchData.exception)); + } + } + return result; + } + + public static class BatchData { + private final List operations; + private final 
Object[] results; + private Throwable exception; + + public BatchData(List operations, Object[] results) { + this.operations = operations; + this.results = results; + } + + public List getOperations() { + return operations; + } + + public Object[] getResults() { + return results; + } + + public Throwable getException() { + return exception; + } + + public void setException(Throwable t) { + this.exception = t; + } + } + + public static boolean canBatchBePerformedConcurrently(List operations) { + // Only Puts and Deletes can be performed concurrently. + // We assume that RowMutations can consist of only Puts and Deletes (which is true in HBase 1.x + // and 2.x). + for (Row operation : operations) { + if (!(operation instanceof Put) + && !(operation instanceof Delete) + && !(operation instanceof RowMutations)) { + return false; + } + } + return true; + } + + @SuppressWarnings("unchecked") + public static List rewriteIncrementsAndAppendsAsPuts( + List successfulOperations, Object[] successfulResults) { + List rewrittenRows = new ArrayList<>(successfulOperations.size()); + for (int i = 0; i < successfulOperations.size(); i++) { + ActionType operation = successfulOperations.get(i); + if (operation instanceof Increment || operation instanceof Append) { + Result result = (Result) successfulResults[i]; + // This cast would fail iff ActionType == Increment || ActionType == Append, but if any of + // the operations is an Increment or an Append, then we are performing a batch and + // ActionType == Row. + rewrittenRows.add((ActionType) makePutFromResult(result)); + } else { + rewrittenRows.add(operation); + } + } + return rewrittenRows; + } + + public static + FailedSuccessfulSplit createOperationsSplit( + List operations, + Object[] results, + Predicate resultIsFaultyPredicate, + Class resultTypeClass, + boolean skipReads) { + if (skipReads) { + ReadWriteSplit readWriteSplit = + new ReadWriteSplit<>(operations, results, resultTypeClass); + return new FailedSuccessfulSplit<>( +
readWriteSplit.writeOperations, + readWriteSplit.writeResults, + resultIsFaultyPredicate, + resultTypeClass); + } + return new FailedSuccessfulSplit<>( + operations, results, resultIsFaultyPredicate, resultTypeClass); + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/Batcher.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/Batcher.java new file mode 100644 index 0000000000..faeb957c88 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/Batcher.java @@ -0,0 +1,476 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.canBatchBePerformedConcurrently; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.reconcileBatchResultsConcurrent; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.reconcileBatchResultsSequential; + +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringTable.RequestScheduler; +import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncTableWrapper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.BatchData; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.FailedSuccessfulSplit; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.ReadWriteSplit; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils.RewrittenIncrementAndAppendIndicesInfo; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.VerificationContinuationFactory; +import com.google.common.base.Function; +import com.google.common.base.Preconditions; +import com.google.common.base.Predicate; +import com.google.common.base.Supplier; +import com.google.common.util.concurrent.FutureCallback; +import com.google.common.util.concurrent.ListenableFuture; +import com.google.common.util.concurrent.MoreExecutors; +import com.google.common.util.concurrent.SettableFuture; +import java.io.IOException; +import 
java.io.InterruptedIOException; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.ExecutionException; +import javax.annotation.Nullable; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.client.coprocessor.Batch.Callback; +import org.checkerframework.checker.nullness.compatqual.NullableDecl; + +/** + * Helper class that handles logic related to mirroring {@link Table#batch(List, Object[])} + * operation. Handles performing both sequential and concurrent operations. + * + *

Uses static helper methods from {@link BatchHelpers}. + * + *

Batch is complicated: it handles both writes and reads, each of which is handled differently + * depending on whether it succeeds or fails and whether that happens on the primary or on the + * secondary database. The HBase API returns batch results as an array with the result at index i + * corresponding to the operation at index i in the input operation list. For these reasons we have + * to match operations with their results after the primary and secondary operations complete, and + * select the operations that succeeded to be performed on the secondary database. Moreover, in + * concurrent mode we have to reconcile errors reported from either of the databases back + * into the result for the user. All those reasons together make this code quite complicated and we + * currently cannot find a simpler way of implementing it without splitting it into multiple batch + * calls. + * + *

TODO: Simplify this code. + */ +public class Batcher { + private static final Logger Log = new Logger(Batcher.class); + + private final Table primaryTable; + private final AsyncTableWrapper secondaryAsyncWrapper; + private final RequestScheduler requestScheduler; + private final SecondaryWriteErrorConsumer secondaryWriteErrorConsumer; + private final VerificationContinuationFactory verificationContinuationFactory; + private final ReadSampler readSampler; + private final Predicate resultIsFaultyPredicate; + private final boolean waitForSecondaryWrites; + private final boolean performWritesConcurrently; + private final MirroringTracer mirroringTracer; + private final Timestamper timestamper; + + public Batcher( + Table primaryTable, + AsyncTableWrapper secondaryAsyncWrapper, + RequestScheduler requestScheduler, + SecondaryWriteErrorConsumer secondaryWriteErrorConsumer, + VerificationContinuationFactory verificationContinuationFactory, + ReadSampler readSampler, + Timestamper timestamper, + Predicate resultIsFaultyPredicate, + boolean waitForSecondaryWrites, + boolean performWritesConcurrently, + MirroringTracer mirroringTracer) { + this.primaryTable = primaryTable; + this.secondaryAsyncWrapper = secondaryAsyncWrapper; + this.requestScheduler = requestScheduler; + this.secondaryWriteErrorConsumer = secondaryWriteErrorConsumer; + this.verificationContinuationFactory = verificationContinuationFactory; + this.readSampler = readSampler; + this.resultIsFaultyPredicate = resultIsFaultyPredicate; + this.waitForSecondaryWrites = waitForSecondaryWrites; + this.performWritesConcurrently = performWritesConcurrently; + this.mirroringTracer = mirroringTracer; + this.timestamper = timestamper; + } + + // TODO: Point writes shouldn't be implemented using batch. + // It was noticed a while back that there was a performance penalty in Bigtable when performing + // point writes as batch writes (not sure if it's still the case). It is also misleading in + // metrics.
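The index bookkeeping described in the class comment above can be sketched independently of HBase: remember which input indices succeeded on the primary, run only those operations on the secondary, then scatter the compacted secondary results back into the full-size result array. The names below are illustrative stand-ins, not the client's actual helpers:

```java
import java.util.ArrayList;
import java.util.List;

public class IndexMatchingSketch {
  // A result slot is "faulty" when it holds null or a Throwable.
  static boolean isFaulty(Object result) {
    return result == null || result instanceof Throwable;
  }

  // Indices of operations that succeeded on the primary; only those operations
  // are replayed on the secondary, so the secondary arrays are shorter.
  static List<Integer> successfulIndices(Object[] primaryResults) {
    List<Integer> indices = new ArrayList<>();
    for (int i = 0; i < primaryResults.length; i++) {
      if (!isFaulty(primaryResults[i])) {
        indices.add(i);
      }
    }
    return indices;
  }

  // Scatters the compacted secondary results back into the slots of the
  // full-size array so that index i again corresponds to input operation i.
  static void scatterSecondaryResults(
      List<Integer> indices, Object[] secondaryResults, Object[] fullSizeOut) {
    for (int j = 0; j < indices.size(); j++) {
      fullSizeOut[indices.get(j)] = secondaryResults[j];
    }
  }
}
```

This is the same contract `FailedSuccessfulSplit` and the reconciliation helpers maintain: the user-visible array always keeps the original positions.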
+ public void batchSingleWriteOperation(Row operation) throws IOException { + Object[] results = new Object[1]; + try { + batch(Collections.singletonList(operation), results); + } catch (RetriesExhaustedWithDetailsException e) { + Throwable exception = e.getCause(0); + if (exception instanceof IOException) { + throw (IOException) exception; + } + throw new IOException(exception); + } catch (InterruptedException e) { + InterruptedIOException interruptedIOException = new InterruptedIOException(); + interruptedIOException.initCause(e); + throw interruptedIOException; + } + } + + public void batch(final List inputOperations, final Object[] results) + throws IOException, InterruptedException { + batch(inputOperations, results, null); + } + + /** + * Performs the batch operation as defined by the HBase API. The {@code results} array will contain + * instances of {@link Result} for successful operations and {@code null} or {@link Throwable} for + * operations that have failed (this behavior is not documented, but both hbase and java-bigtable + * clients work this way). Moreover, if any of the operations in the batch failed, {@link + * RetriesExhaustedWithDetailsException} will be thrown with details of the failed operations (which + * is also not documented, but both clients consistently throw this exception). + */ + public void batch( + final List inputOperations, + final Object[] results, + @Nullable final Callback callback) + throws IOException, InterruptedException { + List timestampedInputOperations = timestamper.fillTimestamp(inputOperations); + final RewrittenIncrementAndAppendIndicesInfo actions = + new RewrittenIncrementAndAppendIndicesInfo<>(timestampedInputOperations); + Log.trace( + "[%s] batch(operations=%s, results)", this.primaryTable.getName(), actions.operations); + + // We store batch results in an internal variable to prevent the user from modifying them while + // they might still be used by the asynchronous secondary operation.
+ final Object[] internalPrimaryResults = new Object[results.length]; + + CallableThrowingIOAndInterruptedException primaryOperation = + new CallableThrowingIOAndInterruptedException() { + @Override + public Void call() throws IOException, InterruptedException { + if (callback == null) { + primaryTable.batch(actions.operations, internalPrimaryResults); + } else { + primaryTable.batchCallback(actions.operations, internalPrimaryResults, callback); + } + return null; + } + }; + + try { + if (!this.performWritesConcurrently || !canBatchBePerformedConcurrently(actions.operations)) { + sequentialBatch(internalPrimaryResults, actions.operations, primaryOperation); + } else { + concurrentBatch(internalPrimaryResults, actions.operations, primaryOperation); + } + } finally { + actions.discardUnwantedResults(internalPrimaryResults); + System.arraycopy(internalPrimaryResults, 0, results, 0, results.length); + } + } + + private void sequentialBatch( + Object[] results, + List operations, + CallableThrowingIOAndInterruptedException primaryOperation) + throws IOException, InterruptedException { + if (this.waitForSecondaryWrites) { + sequentialSynchronousBatch(results, operations, primaryOperation); + } else { + sequentialAsynchronousBatch(results, operations, primaryOperation); + } + } + + /** + * Performs batch of {@code operations} synchronously on primary and asynchronously on secondary. + * + *

Operations that failed on primary are not mirrored on secondary to prevent ghost writes to + * secondary. + * + *

{@code results} contains results of primary batch and any exception thrown by primary batch + * is forwarded to the user as-is. + * + *

This mode doesn't incur any additional latency and prevents ghost writes, but the user + * cannot handle secondary errors manually; they are always handled by {@link + * SecondaryWriteErrorConsumer}. + */ + private void sequentialAsynchronousBatch( + Object[] results, + List operations, + CallableThrowingIOAndInterruptedException primaryOperation) + throws IOException, InterruptedException { + try { + this.mirroringTracer.spanFactory.wrapPrimaryOperation(primaryOperation, HBaseOperation.BATCH); + } finally { + scheduleSecondaryWriteBatchOperations(operations, results); + } + } + + /** + * Performs batch of {@code operations} synchronously on primary and on secondary. + * + *

Operations that failed on primary are not mirrored on secondary to prevent ghost writes to + * secondary. + * + *

{@code results} contains results of operations that succeeded on both databases and {@link + * Throwable}s for operations that failed on one of the databases. {@code Throwable}s are marked + * with {@link MirroringOperationException} denoting which database rejected the operation. {@link + * RetriesExhaustedWithDetailsException} is thrown by this method in case of any failures; causes + * in that exception are also marked with {@link MirroringOperationException}. + * + *

This mode incurs additional latency and prevents ghost writes; the user can handle + * secondary errors manually, and they are also passed to {@link SecondaryWriteErrorConsumer}. + */ + private void sequentialSynchronousBatch( + Object[] results, + List operations, + CallableThrowingIOAndInterruptedException primaryOperation) + throws IOException, InterruptedException { + BatchData primaryBatchData = new BatchData(operations, results); + try { + this.mirroringTracer.spanFactory.wrapPrimaryOperation(primaryOperation, HBaseOperation.BATCH); + } catch (RetriesExhaustedWithDetailsException e) { + primaryBatchData.setException(e); + } catch (InterruptedException e) { + throw MirroringOperationException.markedAsPrimaryException(e, null); + } catch (IOException e) { + throw MirroringOperationException.markedAsPrimaryException(e, null); + } + + ListenableFuture secondaryResult = + scheduleSecondaryWriteBatchOperations(operations, results); + + BatchData secondaryBatchData; + try { + secondaryBatchData = secondaryResult.get(); + } catch (ExecutionException e) { + throw new IllegalStateException("secondaryResult threw an unexpected exception."); + } + reconcileBatchResultsSequential( + results, primaryBatchData, secondaryBatchData, resultIsFaultyPredicate); + } + + private ListenableFuture scheduleSecondaryWriteBatchOperations( + final List operations, final Object[] results) { + final SettableFuture result = SettableFuture.create(); + + boolean skipReads = !readSampler.shouldNextReadOperationBeSampled(); + final FailedSuccessfulSplit failedSuccessfulSplit = + BatchHelpers.createOperationsSplit( + operations, results, resultIsFaultyPredicate, Result.class, skipReads); + + if (failedSuccessfulSplit.successfulOperations.size() == 0) { + result.set(new BatchData(Collections.emptyList(), new Object[0])); + return result; + } + + List operationsToScheduleOnSecondary = + BatchHelpers.rewriteIncrementsAndAppendsAsPuts( + failedSuccessfulSplit.successfulOperations,
failedSuccessfulSplit.successfulResults); + + final Object[] resultsSecondary = new Object[operationsToScheduleOnSecondary.size()]; + + final BatchData secondaryBatchData = + new BatchData(operationsToScheduleOnSecondary, resultsSecondary); + + // List of writes created by this call contains Puts instead of Increments and Appends and it + // can be passed to secondaryWriteErrorConsumer. + final ReadWriteSplit successfulReadWriteSplit = + new ReadWriteSplit<>( + failedSuccessfulSplit.successfulOperations, + failedSuccessfulSplit.successfulResults, + Result.class); + + final FutureCallback verificationFuture = + BatchHelpers.createBatchVerificationCallback( + failedSuccessfulSplit, + successfulReadWriteSplit, + resultsSecondary, + verificationContinuationFactory.getMismatchDetector(), + this.secondaryWriteErrorConsumer, + resultIsFaultyPredicate, + this.mirroringTracer); + + FutureCallback verificationCallback = + new FutureCallback() { + @Override + public void onSuccess(@NullableDecl Void aVoid) { + verificationFuture.onSuccess(aVoid); + } + + @Override + public void onFailure(Throwable throwable) { + secondaryBatchData.setException(throwable); + verificationFuture.onFailure(throwable); + } + }; + + RequestResourcesDescription requestResourcesDescription = + new RequestResourcesDescription( + operationsToScheduleOnSecondary, successfulReadWriteSplit.readResults); + + // If flow controller errs and won't allow the request we will handle the error using this + // handler. 
+ Function flowControlReservationErrorConsumer = + new Function() { + @Override + public Void apply(Throwable throwable) { + secondaryBatchData.setException(throwable); + secondaryWriteErrorConsumer.consume( + HBaseOperation.BATCH, successfulReadWriteSplit.writeOperations, throwable); + return null; + } + }; + + ListenableFuture verificationCompleted = + this.requestScheduler.scheduleRequestWithCallback( + requestResourcesDescription, + this.secondaryAsyncWrapper.batch(operationsToScheduleOnSecondary, resultsSecondary), + verificationCallback, + flowControlReservationErrorConsumer); + + verificationCompleted.addListener( + new Runnable() { + @Override + public void run() { + result.set(secondaryBatchData); + } + }, + MoreExecutors.directExecutor()); + + return result; + } + + /** + * Runs batch operation concurrently on both primary and secondary databases and waits for both of + * them to finish. {@code primaryResults} parameter will contain correct {@link Result}s if + * corresponding operation was successful on both databases and a {@link Throwable} marked with + * {@link MirroringOperationException} if the operation failed on any database. If any operation + * failed this method will throw appropriate {@link RetriesExhaustedWithDetailsException} with + * causes marked with {@link MirroringOperationException}. The user can use those markers to + * handle exceptions on primary and secondary database according to their needs. + * + *

This mode allows the user to manually handle errors on the secondary database without the + * additional latency introduced by the sequential synchronous mode, but it comes at the price of + * additional inconsistency between primary and secondary databases - some of the writes might fail on + * primary but succeed on secondary. + + *

This mode doesn't use {@link SecondaryWriteErrorConsumer} to handle failed writes on + * secondary; errors are reported to the user as exceptions. + + *

Only {@link org.apache.hadoop.hbase.client.Put}s, {@link + * org.apache.hadoop.hbase.client.Delete}s and {@link org.apache.hadoop.hbase.client.RowMutations} + * that consist of them can be executed concurrently; if {@code operations} contains any other + * operation, this method shouldn't be called. + */ + private void concurrentBatch( + final Object[] primaryResults, + final List operations, + final CallableThrowingIOAndInterruptedException primaryOperation) + throws IOException, InterruptedException { + Preconditions.checkArgument(this.waitForSecondaryWrites && this.performWritesConcurrently); + + RequestResourcesDescription requestResourcesDescription = + new RequestResourcesDescription(operations, new Result[0]); + final Object[] secondaryResults = new Object[operations.size()]; + final Throwable[] flowControllerException = new Throwable[1]; + + final BatchData primaryBatchData = new BatchData(operations, primaryResults); + final BatchData secondaryBatchData = new BatchData(operations, secondaryResults); + // This is an operation that will be run by + // `RequestScheduler#scheduleRequestWithCallback` after it acquires flow controller resources. + // It will schedule the asynchronous secondary operation and run the primary operation in the main + // thread, to make them run concurrently. We will wait for the secondary to finish later in + // this method. + final Supplier> invokeBothOperations = + new Supplier>() { + @Override + public ListenableFuture get() { + // We are scheduling secondary batch to run concurrently. + // Calling `.get()` starts the asynchronous operation; it doesn't wait for it to + // finish. + ListenableFuture secondaryOperationEnded = + secondaryAsyncWrapper.batch(operations, secondaryResults).get(); + // Primary operation is then performed synchronously.
+ try { + primaryOperation.call(); + } catch (IOException | InterruptedException e) { + primaryBatchData.setException(e); + } + // Primary operation has ended and its results are available to the user. + + // We want to schedule the verification after the secondary operation finishes. + return secondaryOperationEnded; + } + }; + + // Concurrent writes are also synchronous; errors will be thrown to the user after both ops + // finish. + FutureCallback verification = + new FutureCallback() { + @Override + public void onSuccess(@NullableDecl Void result) {} + + @Override + public void onFailure(Throwable throwable) { + secondaryBatchData.setException(throwable); + } + }; + + // If flow controller errs and won't allow the request we will handle the error using this + // handler. + Function flowControlReservationErrorConsumer = + new Function() { + @NullableDecl + @Override + public Void apply(@NullableDecl Throwable throwable) { + flowControllerException[0] = throwable; + return null; + } + }; + + ListenableFuture verificationCompleted = + this.requestScheduler.scheduleRequestWithCallback( + requestResourcesDescription, + invokeBothOperations, + verification, + flowControlReservationErrorConsumer); + + try { + // Wait until all asynchronous operations are completed. + verificationCompleted.get(); + } catch (ExecutionException e) { + throw new IllegalStateException("secondaryResult threw an unexpected exception."); + } + + // Checks results of primary and secondary operations; we consider an operation failed if at + // least one of the operations has failed. This method will fill `primaryResults` with errors + // from both operations and will throw appropriate RetriesExhaustedWithDetailsException.
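The shape of this concurrent invocation — start the secondary write asynchronously, run the primary write on the calling thread, then wait for both before reconciling — can be sketched with plain JDK primitives. The write methods below are hypothetical stand-ins for the two database calls, not the client's API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentWriteSketch {
  // Stand-ins for the primary and secondary database calls.
  static String writePrimary() { return "primary-ok"; }
  static String writeSecondary() { return "secondary-ok"; }

  /** Runs both writes concurrently and returns their results after both finish. */
  public static String[] writeBoth() throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    try {
      // Schedule the secondary write asynchronously; this call does not block.
      CompletableFuture<String> secondary =
          CompletableFuture.supplyAsync(ConcurrentWriteSketch::writeSecondary, executor);
      // Run the primary write synchronously on the calling thread,
      // concurrently with the secondary write.
      String primaryResult = writePrimary();
      // Wait for the secondary write before returning, as in synchronous mode.
      String secondaryResult = secondary.get();
      return new String[] {primaryResult, secondaryResult};
    } finally {
      executor.shutdown();
    }
  }

  public static void main(String[] args) throws Exception {
    String[] results = writeBoth();
    System.out.println(results[0] + " " + results[1]);
  }
}
```

Waiting on the future before returning is what makes the concurrent mode still synchronous from the caller's perspective.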
+ reconcileBatchResultsConcurrent( + primaryResults, primaryBatchData, secondaryBatchData, resultIsFaultyPredicate); + + if (flowControllerException[0] != null) { + throw MirroringOperationException.markedAsBothException( + new IOException("FlowController rejected the request", flowControllerException[0]), + null, + null); + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/CallableThrowingIOAndInterruptedException.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/CallableThrowingIOAndInterruptedException.java index a07c7ca0fb..6d9779f3e4 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/CallableThrowingIOAndInterruptedException.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/CallableThrowingIOAndInterruptedException.java @@ -17,8 +17,16 @@ import com.google.api.core.InternalApi; import java.io.IOException; +import java.util.List; import java.util.concurrent.Callable; +import org.apache.hadoop.hbase.client.Table; +/** + * A specialization of {@link Callable} that tightens the list of thrown exceptions to {@link + * IOException} and {@link InterruptedException}, which are the only exceptions thrown by some of the + * operations in the HBase API ({@link Table#batch(List, Object[])}). Facilitates error handling when + * such operations are used as callbacks.
+ */ @InternalApi("For internal usage only") public interface CallableThrowingIOAndInterruptedException extends Callable { T call() throws IOException, InterruptedException; diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/CallableThrowingIOException.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/CallableThrowingIOException.java index 88afd001fc..fb478f5fd6 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/CallableThrowingIOException.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/CallableThrowingIOException.java @@ -17,7 +17,15 @@ import com.google.api.core.InternalApi; import java.io.IOException; +import java.util.concurrent.Callable; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Table; +/** + * A specialization of {@link Callable} that tightens the list of thrown exceptions to {@link + * IOException}, which is the only exception thrown by some of the operations in the HBase API ({@link + * Table#put(Put)}). Facilitates error handling when such operations are used as callbacks.
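The throws-narrowing these interfaces rely on is plain Java: an overriding method may declare fewer checked exceptions than the method it overrides, so callers no longer need a catch-all for {@code Exception}. A minimal self-contained sketch (names are illustrative, not the client's):

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class TightenedCallableSketch {
  /** Like Callable, but call() may only throw IOException. */
  interface CallableThrowingIO<T> extends Callable<T> {
    @Override
    T call() throws IOException; // narrower than Callable's `throws Exception`
  }

  public static String runIt(CallableThrowingIO<String> op) {
    try {
      return op.call();
    } catch (IOException e) {
      // Only IOException must be handled; no generic `catch (Exception e)` needed.
      return "io-error";
    }
  }

  public static void main(String[] args) {
    System.out.println(runIt(() -> "fetched"));
    System.out.println(runIt(() -> { throw new IOException("boom"); }));
  }
}
```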
+ */ @InternalApi("For internal usage only") public interface CallableThrowingIOException extends CallableThrowingIOAndInterruptedException { diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/Comparators.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/Comparators.java index dd8ced01b6..d297cccde1 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/Comparators.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/Comparators.java @@ -16,13 +16,34 @@ package com.google.cloud.bigtable.mirroring.hbase1_x.utils; import com.google.api.core.InternalApi; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.compat.CellComparatorCompat; import org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.client.Result; @InternalApi("For internal usage only") public class Comparators { + private static CellComparatorCompat cellComparator; + + static { + // Try to construct 2.x CellComparator compatibility wrapper if available. 
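The 2.x-then-1.x fallback that this static initializer performs can be exercised in isolation. In the sketch below, JDK class names stand in for the compat implementations, and `getDeclaredConstructor().newInstance()` replaces the deprecated `Class#newInstance()`:

```java
import java.util.List;

public class ReflectiveFallbackSketch {
  /**
   * Tries to instantiate the preferred class by name, falling back to the
   * second one if the first is absent from the classpath.
   */
  public static Object loadPreferred(String preferred, String fallback) {
    try {
      return Class.forName(preferred).getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      try {
        return Class.forName(fallback).getDeclaredConstructor().newInstance();
      } catch (ReflectiveOperationException ex) {
        // Neither implementation is available; fail fast, as Comparators does.
        throw new IllegalStateException(ex);
      }
    }
  }

  public static void main(String[] args) {
    // "com.example.NoSuchComparator" plays the role of the 2.x class missing
    // at runtime; java.util.ArrayList plays the 1.x fallback.
    Object impl = loadPreferred("com.example.NoSuchComparator", "java.util.ArrayList");
    System.out.println(impl instanceof List);
  }
}
```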
+ final String comparatorCompat1xImplClass = + "com.google.cloud.bigtable.mirroring.hbase1_x.utils.compat.CellComparatorCompatImpl"; + final String comparatorCompat2xImplClass = + "com.google.cloud.bigtable.mirroring.hbase2_x.utils.compat.CellComparatorCompatImpl"; + try { + cellComparator = + (CellComparatorCompat) Class.forName(comparatorCompat2xImplClass).newInstance(); + } catch (ClassNotFoundException | InstantiationException | IllegalAccessException e) { + try { + cellComparator = + (CellComparatorCompat) Class.forName(comparatorCompat1xImplClass).newInstance(); + } catch (ClassNotFoundException | InstantiationException | IllegalAccessException ex) { + throw new IllegalStateException(ex); + } + } + } + public static boolean resultsEqual(Result result1, Result result2) { if (result1 == null && result2 == null) { return true; @@ -30,11 +51,6 @@ public static boolean resultsEqual(Result result1, Result result2) { if (result1 == null || result2 == null) { return false; } - int rowsComparisionResult = - CellComparator.compareRows(result1.getRow(), 0, 0, result2.getRow(), 0, 0); - if (rowsComparisionResult != 0) { - return false; - } Cell[] cells1 = result1.rawCells(); Cell[] cells2 = result2.rawCells(); @@ -49,8 +65,17 @@ public static boolean resultsEqual(Result result1, Result result2) { if (cells1.length != cells2.length) { return false; } + for (int i = 0; i < cells1.length; i++) { - int cellResult = CellComparator.compare(cells1[i], cells2[i], true); + if (cells1[i] == null && cells2[i] == null) { + continue; + } + + if (cells1[i] == null || cells2[i] == null) { + return false; + } + + int cellResult = cellComparator.compareCells(cells1[i], cells2[i]); if (cellResult != 0) { return false; } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/DefaultSecondaryWriteErrorConsumer.java 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/DefaultSecondaryWriteErrorConsumer.java index 853edcea58..6893eba3b0 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/DefaultSecondaryWriteErrorConsumer.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/DefaultSecondaryWriteErrorConsumer.java @@ -15,26 +15,30 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Logger; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.FailedMutationLogger; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import java.util.List; import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Row; import org.apache.hadoop.hbase.client.RowMutations; +/** + * Default implementation of {@link SecondaryWriteErrorConsumer} which forwards write errors to + * {@link FailedMutationLogger}. 
+ */ public class DefaultSecondaryWriteErrorConsumer implements SecondaryWriteErrorConsumer { private static final com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger Log = new com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger( DefaultSecondaryWriteErrorConsumer.class); - private final Logger writeErrorLogger; + private final FailedMutationLogger failedMutationLogger; - public DefaultSecondaryWriteErrorConsumer(Logger writeErrorLogger) { - this.writeErrorLogger = writeErrorLogger; + public DefaultSecondaryWriteErrorConsumer(FailedMutationLogger failedMutationLogger) { + this.failedMutationLogger = failedMutationLogger; } private void consume(Mutation r, Throwable cause) { try { - writeErrorLogger.mutationFailed(r, cause); + failedMutationLogger.mutationFailed(r, cause); } catch (InterruptedException e) { Log.error( "Writing mutation that failed on secondary database to faillog interrupted: mutation=%s, failure_cause=%s, exception=%s", @@ -56,7 +60,6 @@ public void consume(HBaseOperation operation, Row row, Throwable cause) { } else if (row instanceof RowMutations) { consume((RowMutations) row, cause); } else { - assert false; throw new IllegalArgumentException("Not a write operation"); } } @@ -67,4 +70,11 @@ public void consume(HBaseOperation operation, List operations, Th consume(operation, row, cause); } } + + public static class Factory implements SecondaryWriteErrorConsumer.Factory { + @Override + public SecondaryWriteErrorConsumer create(FailedMutationLogger failedMutationLogger) { + return new DefaultSecondaryWriteErrorConsumer(failedMutationLogger); + } + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/MirroringConfigurationHelper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/MirroringConfigurationHelper.java index 
b7c759b52a..4b371afc48 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/MirroringConfigurationHelper.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/MirroringConfigurationHelper.java @@ -17,11 +17,21 @@ import static com.google.common.base.Preconditions.checkArgument; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringResultScanner; +import com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.MirroringBufferedMutator; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Appender; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.DefaultAppender; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.DefaultSerializer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Serializer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestCountingFlowControlStrategy; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.DefaultMismatchDetector; import java.util.List; import java.util.Map; import java.util.Objects; import java.util.regex.Pattern; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.client.BufferedMutator; +import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; @@ -31,6 +41,8 @@ public class MirroringConfigurationHelper { * Key to set to a name of Connection class that should be used to connect to primary database. It * is used as hbase.client.connection.impl when creating connection to primary database. Set to * {@code default} to use default HBase connection class. + * + *

Required. */ public static final String MIRRORING_PRIMARY_CONNECTION_CLASS_KEY = "google.bigtable.mirroring.primary-client.connection.impl"; @@ -39,6 +51,8 @@ public class MirroringConfigurationHelper { * Key to set to a name of Connection class that should be used to connect to secondary database. * It is used as hbase.client.connection.impl when creating connection to secondary database. Set * to an {@code default} to use default HBase connection class. + * + *

Required. */ public static final String MIRRORING_SECONDARY_CONNECTION_CLASS_KEY = "google.bigtable.mirroring.secondary-client.connection.impl"; @@ -47,6 +61,8 @@ public class MirroringConfigurationHelper { * Key to set to a name of Connection class that should be used to connect asynchronously to * primary database. It is used as hbase.client.async.connection.impl when creating connection to * primary database. Set to {@code default} to use default HBase connection class. + * + *

Required when using HBase 2.x. */ public static final String MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY = "google.bigtable.mirroring.primary-client.async.connection.impl"; @@ -55,6 +71,8 @@ public class MirroringConfigurationHelper { * Key to set to a name of Connection class that should be used to connect asynchronously to * secondary database. It is used as hbase.client.async.connection.impl when creating connection * to secondary database. Set to {@code default} to use default HBase connection class. + * + *

Required when using HBase 2.x. */ public static final String MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY = "google.bigtable.mirroring.secondary-client.async.connection.impl"; @@ -67,6 +85,8 @@ public class MirroringConfigurationHelper { * passed to each of connections, e.g. zookeeper url. * *

Prefixes should not contain dot at the end. + * + *

default: empty */ public static final String MIRRORING_PRIMARY_CONFIG_PREFIX_KEY = "google.bigtable.mirroring.primary-client.prefix"; @@ -74,26 +94,87 @@ public class MirroringConfigurationHelper { /** * If this key is set, then only parameters that start with given prefix are passed to secondary * Connection. + * + *

default: empty */ public static final String MIRRORING_SECONDARY_CONFIG_PREFIX_KEY = "google.bigtable.mirroring.secondary-client.prefix"; - public static final String MIRRORING_MISMATCH_DETECTOR_CLASS = - "google.bigtable.mirroring.mismatch-detector.impl"; + /** + * Path to {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector.Factory} of + * MismatchDetector. + * + *

default: {@link DefaultMismatchDetector.Factory}, logs detected mismatches to stdout and + * reports them as OpenCensus metrics. + */ + public static final String MIRRORING_MISMATCH_DETECTOR_FACTORY_CLASS = + "google.bigtable.mirroring.mismatch-detector.factory-impl"; + + /** + * Path to class to be used as FlowControllerStrategy. + * + *

default: {@link RequestCountingFlowControlStrategy}. + */ + public static final String MIRRORING_FLOW_CONTROLLER_STRATEGY_FACTORY_CLASS = + "google.bigtable.mirroring.flow-controller.factory-impl"; + + /** + * Maximal number of outstanding secondary database requests before throttling requests to primary + * database. + * + *

default: 500. + */ + public static final String MIRRORING_FLOW_CONTROLLER_STRATEGY_MAX_OUTSTANDING_REQUESTS = + "google.bigtable.mirroring.flow-controller-strategy.max-outstanding-requests"; + + /** + * Maximal number of bytes used by internal buffers for asynchronous requests before throttling + * requests to primary database. + * + *

default: 256MB. + */ + public static final String MIRRORING_FLOW_CONTROLLER_STRATEGY_MAX_USED_BYTES = + "google.bigtable.mirroring.flow-controller-strategy.max-used-bytes"; - public static final String MIRRORING_FLOW_CONTROLLER_STRATEGY_CLASS = - "google.bigtable.mirroring.flow-controller.impl"; + /** + * Integer value representing how many first bytes of binary values (such as row) should be + * converted to hex and then logged in case of error. + * + *

default: 32. + */ + public static final String MIRRORING_WRITE_ERROR_LOG_MAX_BINARY_VALUE_LENGTH = + "google.bigtable.mirroring.write-error-log.max-binary-value-bytes-logged"; - public static final String MIRRORING_FLOW_CONTROLLER_MAX_OUTSTANDING_REQUESTS = - "google.bigtable.mirroring.flow-controller.max-outstanding-requests"; + /** + * Path to class to be used as a {@link SecondaryWriteErrorConsumer.Factory} for consumer of + * secondary database write errors. + * + *

default: {@link DefaultSecondaryWriteErrorConsumer.Factory}, forwards errors to faillog + * using Appender and Serializer. + */ + public static final String MIRRORING_WRITE_ERROR_CONSUMER_FACTORY_CLASS = + "google.bigtable.mirroring.write-error-consumer.factory-impl"; - public static final String MIRRORING_WRITE_ERROR_CONSUMER_CLASS = - "google.bigtable.mirroring.write-error-consumer.impl"; + /** + * Faillog Appender {@link Appender.Factory} implementation. + * + *

default: {@link DefaultAppender.Factory}, writes data serialized by Serializer + * implementation to file on disk. + */ + public static final String MIRRORING_WRITE_ERROR_LOG_APPENDER_FACTORY_CLASS = + "google.bigtable.mirroring.write-error-log.appender.factory-impl"; - public static final String MIRRORING_WRITE_ERROR_LOG_APPENDER_CLASS = - "google.bigtable.mirroring.write-error-log.appender.impl"; - public static final String MIRRORING_WRITE_ERROR_LOG_SERIALIZER_CLASS = - "google.bigtable.mirroring.write-error-log.serializer.impl"; + /** + * Faillog {@link Serializer.Factory} implementation, responsible for serializing write errors + * reported by the Logger to binary representation, which is later appended to resulting file by + * the {@link Appender}. + * + *

default: {@link DefaultSerializer}, dumps supplied mutation along with error stacktrace as + * JSON. + */ + public static final String MIRRORING_WRITE_ERROR_LOG_SERIALIZER_FACTORY_CLASS = + "google.bigtable.mirroring.write-error-log.serializer.factory-impl"; /** * Integer value representing percentage of read operations performed on primary database that @@ -104,6 +185,8 @@ public class MirroringConfigurationHelper { * results. * *

Correct values are integers ranging from 0 to 100, inclusive. + + *

default: 100 */ public static final String MIRRORING_READ_VERIFICATION_RATE_PERCENT = "google.bigtable.mirroring.read-verification-rate-percent"; @@ -128,6 +211,103 @@ public class MirroringConfigurationHelper { public static final String MIRRORING_CONCURRENT_WRITES = "google.bigtable.mirroring.concurrent-writes"; + /** + * When set to {@code true} mirroring client will wait for operations to be performed on secondary + * database before returning to the user. In this mode exceptions thrown by mirroring operations + * reflect errors that happened on one of the databases. Types of thrown exceptions are not + * changed, but a {@link com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException} + * is added as a root cause for thrown exceptions (For more details see {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException}). + * + *

Defaults to {@code false}. + */ + public static final String MIRRORING_SYNCHRONOUS_WRITES = + "google.bigtable.mirroring.synchronous-writes"; + + /** + * Determines the path prefix used for generating the failed mutations log file names. + * + *

In the default mode secondary mutations are executed asynchronously, so their status is not + * reported to the user. Instead, they are logged to a failed mutation log, which can be inspected + * manually, collected, or read programmatically to retry the mutations. + + *

This property should not be empty. Example value: {@code + * "/tmp/hbase_mirroring_client_failed_mutations"}. + */ + public static final String MIRRORING_FAILLOG_PREFIX_PATH_KEY = + "google.bigtable.mirroring.write-error-log.appender.prefix-path"; + + /** + * Maximum size of the buffer holding failed mutations before they are logged to persistent + * storage. + * + *

Defaults to {@code 20 * 1024 * 1024}. + */ + public static final String MIRRORING_FAILLOG_MAX_BUFFER_SIZE_KEY = + "google.bigtable.mirroring.write-error-log.appender.max-buffer-size"; + + /** + * Controls the behavior of the failed mutation log on persistent storage not keeping up with + * writing the mutations. + * + *

If set to {@code true}, mutations will be dropped; otherwise they will block the thread + * until the storage catches up. + + *

Defaults to {@code false}. + */ + public static final String MIRRORING_FAILLOG_DROP_ON_OVERFLOW_KEY = + "google.bigtable.mirroring.write-error-log.appender.drop-on-overflow"; + + /** + * Number of milliseconds that {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection} should wait synchronously for + * pending operations before terminating connection with secondary database. + * + *

If the timeout is reached, some of the operations on the secondary database may still be + * in-flight and would be lost if we closed the secondary connection immediately. Those requests + * are not cancelled and will be performed asynchronously until the program terminates. + + *

Defaults to 60000. + */ + public static final String MIRRORING_CONNECTION_CONNECTION_TERMINATION_TIMEOUT = + "google.bigtable.mirroring.connection.termination-timeout"; + + /** + * Number of previous unmatched results that {@link MirroringResultScanner} should check before + * declaring scan's results erroneous. + * + *

If not set, a default value of 5 is used. Matching one of the buffered results removes earlier + * entries from the buffer. + */ + public static final String MIRRORING_SCANNER_BUFFERED_MISMATCHED_READS = + "google.bigtable.mirroring.result-scanner.buffered-mismatched-reads"; + + /** + * Enables timestamping {@link org.apache.hadoop.hbase.client.Put}s without timestamp set based on + * client's host local time. Client-side timestamps assigned by {@link Table}s and {@link + * BufferedMutator}s created by one {@link Connection} are always increasing, even if the system + * clock is moved backwards, for example by NTP or manually by the user. + + *

There are three possible modes of client-side timestamping: + * + *

    + *
  • disabled - leads to inconsistencies between mirrored databases because timestamps are + * assigned separately on databases' servers. +
  • inplace - all mutations without timestamp are modified in place and have timestamps + * assigned when submitted to Table or BufferedMutator. If mutation objects are reused then + * COPY mode should be used. + *
  • copy - timestamps are added to `Put`s after copying them; this increases CPU load, but + * mutations submitted to Tables and BufferedMutators can be reused. This mode, combined + * with synchronous writes, gives a guarantee that after the HBase API call returns, submitted + * mutation objects are no longer used and can be safely modified by the user and submitted + * again. +
+ * + *

Default value: inplace. + */ + public static final String MIRRORING_ENABLE_DEFAULT_CLIENT_SIDE_TIMESTAMPS = + "google.bigtable.mirroring.enable-default-client-side-timestamps"; + public static void fillConnectionConfigWithClassImplementation( Configuration connectionConfig, Configuration config, @@ -167,7 +347,9 @@ public static void checkParameters( } else { throw new IllegalArgumentException( String.format( - "Values of %s and %s should be different.", + "Values of %s and %s should be different. Prefixes are used to differentiate " + + "between primary and secondary configurations. If you want to use the same " + + "configuration for both databases then you shouldn't use prefixes at all.", MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, MIRRORING_SECONDARY_CONFIG_PREFIX_KEY)); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/OperationUtils.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/OperationUtils.java index 0c9282680e..6fa6aa0e8b 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/OperationUtils.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/OperationUtils.java @@ -15,10 +15,18 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; import java.util.Map; import java.util.NavigableMap; +import java.util.Set; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.client.Append; +import org.apache.hadoop.hbase.client.Increment; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; +import 
org.apache.hadoop.hbase.client.Row; public class OperationUtils { public static Put makePutFromResult(Result result) { @@ -38,4 +46,43 @@ public static Put makePutFromResult(Result result) { } return put; } + + public static Result emptyResult() { + return Result.create(new Cell[0]); + } + + public static class RewrittenIncrementAndAppendIndicesInfo { + /** In batch() when an input Row's isReturnResults is false an empty Result is returned. */ + public final List operations; + + private final Set unwantedResultIndices; + + public RewrittenIncrementAndAppendIndicesInfo(List inputOperations) { + this.unwantedResultIndices = new HashSet<>(); + this.operations = new ArrayList<>(inputOperations); + for (int i = 0; i < operations.size(); i++) { + Row row = operations.get(i); + if (row instanceof Increment) { + ((Increment) row).setReturnResults(true); + this.unwantedResultIndices.add(i); + } else if (row instanceof Append) { + ((Append) row).setReturnResults(true); + this.unwantedResultIndices.add(i); + } + } + } + + public void discardUnwantedResults(Object[] results) { + if (!this.unwantedResultIndices.isEmpty()) { + for (int i = 0; i < results.length; i++) { + if (results[i] instanceof Result && this.unwantedResultIndices.contains(i)) { + Row op = this.operations.get(i); + if (op instanceof Increment || op instanceof Append) { + results[i] = emptyResult(); + } + } + } + } + } + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/RequestScheduling.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/RequestScheduling.java index 18671ef47b..667aabab59 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/RequestScheduling.java +++ 
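The bookkeeping in RewrittenIncrementAndAppendIndicesInfo — remember which batch indices were forced to return results, then blank those results out before handing them back to the user — can be sketched generically. The types below are simplified stand-ins for HBase's Row and Result:

```java
import java.util.HashSet;
import java.util.Set;

public class DiscardUnwantedResultsSketch {
  private final Set<Integer> unwantedResultIndices = new HashSet<>();

  /** Marks an operation whose forced result should be hidden from the user. */
  public void markUnwanted(int index) {
    unwantedResultIndices.add(index);
  }

  /** Replaces results at marked indices with a placeholder "empty" result. */
  public void discardUnwantedResults(Object[] results) {
    for (int i = 0; i < results.length; i++) {
      if (unwantedResultIndices.contains(i)) {
        results[i] = "EMPTY_RESULT"; // stand-in for Result.create(new Cell[0])
      }
    }
  }

  public static void main(String[] args) {
    DiscardUnwantedResultsSketch info = new DiscardUnwantedResultsSketch();
    info.markUnwanted(1); // say index 1 was an Increment rewritten as a Put
    Object[] results = {"row0-result", "row1-result", "row2-result"};
    info.discardUnwantedResults(results);
    System.out.println(results[1]);
  }
}
```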
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/RequestScheduling.java @@ -38,85 +38,118 @@ public class RequestScheduling { private static final Logger Log = new Logger(RequestScheduling.class); - public static ListenableFuture scheduleRequestAndVerificationWithFlowControl( + /** + * This method starts the supplied asynchronous operation after obtaining a resource reservation from + * the flow controller and registers a callback to be run after the operation is finished. If the + * flow controller rejects the resource reservation or waiting for the reservation is interrupted, + * the operation is not started and the user-provided {@code flowControlReservationErrorConsumer} is + * invoked. + * + * @param requestResourcesDescription Description of resources that should be reserved from the + * flow controller. + * @param operation Asynchronous operation to start after obtaining resources. + * @param callback Callback to be called after {@code operation} completes. + * @param flowController Flow controller that should reserve the resources. + * @param mirroringTracer Tracer used for tracing flow control and callback operations. + * @param flowControlReservationErrorConsumer Handler that should be called if obtaining the + * reservation from the flow controller fails. + * @return Future that will be set when the operation and callback scheduled by this method + * finish running. The future will also be set if the flow controller rejects the reservation + * request.
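The reserve, run, release contract described in the javadoc can be sketched with JDK primitives. This is an illustration only: `FlowControlSketch` is a hypothetical name, a `Semaphore` stands in for the flow controller and its reservations, and `CompletableFuture` replaces Guava's `ListenableFuture` and `SettableFuture`.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

public class FlowControlSketch {
  // Stands in for FlowController/FlowControlStrategy: a fixed pool of permits.
  final Semaphore permits;

  public FlowControlSketch(int maxInFlight) {
    this.permits = new Semaphore(maxInFlight);
  }

  // Reserve a permit, start the operation, run the callback when it completes,
  // and always release the permit afterwards.
  public <T> CompletableFuture<Void> scheduleRequestWithCallback(
      Supplier<CompletableFuture<T>> operation, Runnable callback) {
    CompletableFuture<Void> callbackCompleted = new CompletableFuture<>();
    if (!permits.tryAcquire()) {
      // Reservation rejected: the operation is never started, but the
      // returned future is still completed so callers are not blocked.
      callbackCompleted.complete(null);
      return callbackCompleted;
    }
    operation
        .get()
        .whenComplete(
            (result, error) -> {
              try {
                callback.run();
              } finally {
                permits.release(); // release the reservation
                callbackCompleted.complete(null); // operation + callback finished
              }
            });
    return callbackCompleted;
  }
}
```

As in the patch, the release happens in a `finally` block, so a callback that throws cannot leak its reservation.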
+ */ + public static ListenableFuture scheduleRequestWithCallback( final RequestResourcesDescription requestResourcesDescription, - final Supplier> secondaryResultFutureSupplier, - final FutureCallback verificationCallback, - final FlowController flowController, - final MirroringTracer mirroringTracer) { - return scheduleRequestAndVerificationWithFlowControl( - requestResourcesDescription, - secondaryResultFutureSupplier, - verificationCallback, - flowController, - mirroringTracer, - new Function() { - @Override - public Void apply(Throwable t) { - return null; - } - }); - } - - // TODO: remove Verification from the name because it is confusing when used with writes. - public static ListenableFuture scheduleRequestAndVerificationWithFlowControl( - final RequestResourcesDescription requestResourcesDescription, - final Supplier> invokeOperation, - final FutureCallback verificationCallback, + final Supplier> operation, + final FutureCallback callback, final FlowController flowController, final MirroringTracer mirroringTracer, final Function flowControlReservationErrorConsumer) { - final SettableFuture verificationCompletedFuture = SettableFuture.create(); + final SettableFuture callbackCompletedFuture = SettableFuture.create(); - // TODO: no option to drop requests in flow controller is full. final ListenableFuture reservationRequest = flowController.asyncRequestResource(requestResourcesDescription); - final ResourceReservation reservation; + + final ResourceReservation reservation = + waitForReservation( + reservationRequest, flowControlReservationErrorConsumer, mirroringTracer); + + if (reservation == null) { + callbackCompletedFuture.set(null); + return callbackCompletedFuture; + } + + // Creates a callback that will release the reservation and set `callbackCompletedFuture` after + // callback is finished. 
+ FutureCallback wrappedCallback = + wrapCallbackWithReleasingReservationAndCompletingFuture( + callback, reservation, callbackCompletedFuture, mirroringTracer); + + // Start the asynchronous operation. + ListenableFuture operationsResult = operation.get(); + + Futures.addCallback(operationsResult, wrappedCallback, MoreExecutors.directExecutor()); + + return callbackCompletedFuture; + } + + private static + FutureCallback wrapCallbackWithReleasingReservationAndCompletingFuture( + final FutureCallback callback, + final ResourceReservation reservation, + final SettableFuture verificationCompletedFuture, + MirroringTracer mirroringTracer) { + return mirroringTracer.spanFactory.wrapWithCurrentSpan( + new FutureCallback() { + @Override + public void onSuccess(@NullableDecl T t) { + try { + Log.trace("starting verification %s", t); + callback.onSuccess(t); + Log.trace("verification done %s", t); + } finally { + reservation.release(); + verificationCompletedFuture.set(null); + } + } + + @Override + public void onFailure(Throwable throwable) { + try { + callback.onFailure(throwable); + } finally { + reservation.release(); + verificationCompletedFuture.set(null); + } + } + }); + } + + /** + * Waits until reservation is obtained, rejected or interrupted. + * + * @return Obtained {@link ResourceReservation} or {@code null} in case of rejection or + * interruption. 
+ */ + private static ResourceReservation waitForReservation( + ListenableFuture reservationRequest, + Function flowControlReservationErrorConsumer, + MirroringTracer mirroringTracer) { try { try (Scope scope = mirroringTracer.spanFactory.flowControlScope()) { - reservation = reservationRequest.get(); + return reservationRequest.get(); } } catch (InterruptedException | ExecutionException e) { - flowControlReservationErrorConsumer.apply(e); + if (e instanceof InterruptedException) { + flowControlReservationErrorConsumer.apply(e); + } else { + flowControlReservationErrorConsumer.apply(e.getCause()); + } FlowController.cancelRequest(reservationRequest); - verificationCompletedFuture.set(null); - if (e instanceof InterruptedException) { Thread.currentThread().interrupt(); } - return verificationCompletedFuture; + return null; } - - // TODO: what happens to reference count when the thread dies while performing the verification? - Futures.addCallback( - invokeOperation.get(), - mirroringTracer.spanFactory.wrapWithCurrentSpan( - new FutureCallback() { - @Override - public void onSuccess(@NullableDecl T t) { - try { - Log.trace("starting verification %s", t); - verificationCallback.onSuccess(t); - Log.trace("verification done %s", t); - } finally { - reservation.release(); - verificationCompletedFuture.set(null); - } - } - - @Override - public void onFailure(Throwable throwable) { - try { - verificationCallback.onFailure(throwable); - } finally { - reservation.release(); - verificationCompletedFuture.set(null); - } - } - }), - MoreExecutors.directExecutor()); - - return verificationCompletedFuture; } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/SecondaryWriteErrorConsumer.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/SecondaryWriteErrorConsumer.java index 
cc9b2a3eeb..19e245b66b 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/SecondaryWriteErrorConsumer.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/SecondaryWriteErrorConsumer.java @@ -16,13 +16,26 @@ package com.google.cloud.bigtable.mirroring.hbase1_x.utils; import com.google.api.core.InternalApi; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.FailedMutationLogger; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import java.util.List; import org.apache.hadoop.hbase.client.Row; +/** + * Implementations of this interface consume mutations ({@link + * org.apache.hadoop.hbase.client.Mutation} and {@link org.apache.hadoop.hbase.client.RowMutations}) + * that succeeded on the primary database but failed on the secondary. + * + *

The default implementation ({@link DefaultSecondaryWriteErrorConsumer}) forwards those writes to + * {@link FailedMutationLogger} (which, by default, writes them to an on-disk log). + */ @InternalApi("For internal usage only") public interface SecondaryWriteErrorConsumer { void consume(HBaseOperation operation, Row row, Throwable cause); void consume(HBaseOperation operation, List operations, Throwable cause); + + interface Factory { + SecondaryWriteErrorConsumer create(FailedMutationLogger failedMutationLogger) throws Throwable; + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/SecondaryWriteErrorConsumerWithMetrics.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/SecondaryWriteErrorConsumerWithMetrics.java index 398d6b1cbf..1d583cbeba 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/SecondaryWriteErrorConsumerWithMetrics.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/SecondaryWriteErrorConsumerWithMetrics.java @@ -32,13 +32,13 @@ public SecondaryWriteErrorConsumerWithMetrics( @Override public void consume(HBaseOperation operation, List operations, Throwable cause) { - this.mirroringTracer.metricsRecorder.recordWriteMismatches(operation, operations.size()); + this.mirroringTracer.metricsRecorder.recordSecondaryWriteErrors(operation, operations.size()); this.secondaryWriteErrorConsumer.consume(operation, operations, cause); } @Override public void consume(HBaseOperation operation, Row row, Throwable cause) { - this.mirroringTracer.metricsRecorder.recordWriteMismatches(operation, 1); +
this.mirroringTracer.metricsRecorder.recordSecondaryWriteErrors(operation, 1); this.secondaryWriteErrorConsumer.consume(operation, row, cause); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/compat/CellComparatorCompat.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/compat/CellComparatorCompat.java new file mode 100644 index 0000000000..1be3691937 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/compat/CellComparatorCompat.java @@ -0,0 +1,22 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.compat; + +import org.apache.hadoop.hbase.Cell; + +public interface CellComparatorCompat { + int compareCells(Cell cell1, Cell cell2); +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/SlowMismatchDetector.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/compat/CellComparatorCompatImpl.java similarity index 52% rename from bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/SlowMismatchDetector.java rename to bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/compat/CellComparatorCompatImpl.java index 4008979e1b..8cfdcb01e1 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/SlowMismatchDetector.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/compat/CellComparatorCompatImpl.java @@ -1,5 +1,5 @@ /* - * Copyright 2015 Google LLC + * Copyright 2021 Google LLC * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. @@ -13,24 +13,15 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -package com.google.cloud.bigtable.hbase.mirroring.utils; +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.compat; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellComparator; -public class SlowMismatchDetector extends TestMismatchDetector { - public static int sleepTime = 1000; - - public SlowMismatchDetector(MirroringTracer tracer) { - super(tracer); - } +public class CellComparatorCompatImpl implements CellComparatorCompat { @Override - public void onVerificationStarted() { - super.onVerificationStarted(); - try { - Thread.sleep(sleepTime); - } catch (InterruptedException ignored) { - - } + public int compareCells(Cell cell1, Cell cell2) { + return CellComparator.compare(cell1, cell2, true); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Appender.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Appender.java index 3724bf6482..d38bdf6aff 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Appender.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Appender.java @@ -15,6 +15,8 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOptions; + /** * Objects of this class should write log entries somewhere. 
* @@ -22,4 +24,8 @@ */ public interface Appender extends AutoCloseable { void append(byte[] data) throws InterruptedException; + + interface Factory { + Appender create(MirroringOptions.Faillog options) throws Throwable; + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultAppender.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultAppender.java index 69cbba76c5..c62fc546df 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultAppender.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultAppender.java @@ -15,10 +15,12 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FAILLOG_PREFIX_PATH_KEY; import static java.nio.file.StandardOpenOption.CREATE_NEW; import static java.nio.file.StandardOpenOption.SYNC; import static java.nio.file.StandardOpenOption.WRITE; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOptions; import com.google.common.base.Preconditions; import java.io.BufferedOutputStream; import java.io.IOException; @@ -31,7 +33,6 @@ import java.util.TimeZone; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configuration; /** * Write log entries asynchronously. 
@@ -46,22 +47,12 @@ public class DefaultAppender implements Appender { private final LogBuffer buffer; private final Writer writer; - public static final String PREFIX_PATH_KEY = - "google.bigtable.mirroring.write-error-log.appender.prefix-path"; - public static final String MAX_BUFFER_SIZE_KEY = - "google.bigtable.mirroring.write-error-log.appender.max-buffer-size"; - public static final String DROP_ON_OVERFLOW_KEY = - "google.bigtable.mirroring.write-error-log.appender.drop-on-overflow"; - - public DefaultAppender(Configuration configuration) throws IOException { - this( - getPrefixPathFromConfiguration(configuration), - getMaxBufferSizeFromConfiguration(configuration), - getDropOnOverflowFromConfiguration(configuration)); + public DefaultAppender(MirroringOptions.Faillog options) throws IOException { + this(options.prefixPath, options.maxBufferSize, options.dropOnOverflow); } /** - * Create an `DefaultAppender`. + * Create a `DefaultAppender`. * *

The created `DefaultAppender` will create a thread for flushing the data asynchronously. The * data will be transferred to the thread via a buffer of a given maximum size (`maxBufferSize`). @@ -79,6 +70,12 @@ public DefaultAppender(Configuration configuration) throws IOException { */ public DefaultAppender(String pathPrefix, int maxBufferSize, boolean dropOnOverflow) throws IOException { + + Preconditions.checkArgument( + pathPrefix != null && !pathPrefix.isEmpty(), + "DefaultAppender's %s key shouldn't be empty.", + MIRRORING_FAILLOG_PREFIX_PATH_KEY); + // In case of an unclean shutdown the end of the log file may contain partially written log // entries. In order to simplify reading the log files, we assume that everything following an // incomplete entry is to be discarded. In order to satisfy that assumption, we should not @@ -180,40 +177,10 @@ public void run() { } } - private static boolean getDropOnOverflowFromConfiguration(Configuration configuration) { - String dropOnOverflow = configuration.get(DROP_ON_OVERFLOW_KEY, "false"); - Preconditions.checkArgument( - dropOnOverflow != null && !dropOnOverflow.isEmpty(), - "DefaultAppender's %s key shouldn't be empty.", - DROP_ON_OVERFLOW_KEY); - try { - return Boolean.parseBoolean(dropOnOverflow); - } catch (NumberFormatException e) { - throw new IllegalArgumentException( - String.format("DefaultAppender's %s key should be a boolean.", DROP_ON_OVERFLOW_KEY)); - } - } - - private static int getMaxBufferSizeFromConfiguration(Configuration configuration) { - String maxBufferSize = configuration.get(MAX_BUFFER_SIZE_KEY, "20971520"); - Preconditions.checkArgument( - maxBufferSize != null && !maxBufferSize.isEmpty(), - "DefaultAppender's %s key shouldn't be empty.", - MAX_BUFFER_SIZE_KEY); - try { - return Integer.parseInt(maxBufferSize); - } catch (NumberFormatException e) { - throw new IllegalArgumentException( - String.format("DefaultAppender's %s key should be a integer.", MAX_BUFFER_SIZE_KEY)); + public static class 
Factory implements Appender.Factory { + @Override + public Appender create(MirroringOptions.Faillog options) throws IOException { + return new DefaultAppender(options); } } - - private static String getPrefixPathFromConfiguration(Configuration configuration) { - String prefixPath = configuration.get(PREFIX_PATH_KEY); - Preconditions.checkArgument( - prefixPath != null && !prefixPath.isEmpty(), - "DefaultAppender's %s key shouldn't be empty.", - PREFIX_PATH_KEY); - return prefixPath; - } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultSerializer.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultSerializer.java index 46fe5d7bf9..a921fa78ae 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultSerializer.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultSerializer.java @@ -116,4 +116,11 @@ public enum OperationType { } } } + + public static class Factory implements Serializer.Factory { + @Override + public Serializer create() { + return new DefaultSerializer(); + } + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Logger.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/FailedMutationLogger.java similarity index 88% rename from 
bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Logger.java rename to bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/FailedMutationLogger.java index 94aed2d95c..44977b8f71 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Logger.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/FailedMutationLogger.java @@ -23,11 +23,11 @@ * *

Objects of this class enable persisting failed mutations. */ -public class Logger implements AutoCloseable { +public class FailedMutationLogger implements AutoCloseable { private final Serializer serializer; private final Appender appender; - Logger() throws IOException { + FailedMutationLogger() throws IOException { this("/tmp/hbase_mirroring_client_failed_mutations", 1024 * 1024, false); } @@ -46,7 +46,8 @@ public class Logger implements AutoCloseable { * thread attempting to write until there some data is flushed to disk * @throws IOException on failure to write the log */ - Logger(String pathPrefix, int maxBufferSize, boolean dropOnOverFlow) throws IOException { + FailedMutationLogger(String pathPrefix, int maxBufferSize, boolean dropOnOverFlow) + throws IOException { this(new DefaultAppender(pathPrefix, maxBufferSize, dropOnOverFlow), new DefaultSerializer()); } @@ -57,7 +58,7 @@ public class Logger implements AutoCloseable { * @param appender an object responsible for storing log entries * @param serializer on object responsible for transforming failed mutations into log entries */ - public Logger(Appender appender, Serializer serializer) { + public FailedMutationLogger(Appender appender, Serializer serializer) { this.appender = appender; this.serializer = serializer; } @@ -73,8 +74,7 @@ public Logger(Appender appender, Serializer serializer) { */ public void mutationFailed(Mutation mutation, Throwable failureCause) throws InterruptedException { - byte[] serializedEntry = serializer.serialize(mutation, failureCause); - appender.append(serializedEntry); + appender.append(serializer.serialize(mutation, failureCause)); } @Override diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LogBuffer.java 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LogBuffer.java index 8c7a36858d..077bc77815 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LogBuffer.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LogBuffer.java @@ -15,6 +15,7 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog; +import com.google.common.base.Preconditions; import java.io.Closeable; import java.util.ArrayDeque; import java.util.Queue; @@ -83,7 +84,7 @@ private boolean admitLocked(byte[] data) { return false; } - private void waitForAdmissionLocked(byte[] data) throws InterruptedException { + private void waitForSpaceAndAdmitLocked(byte[] data) throws InterruptedException { while (!admitLocked(data) && !shutdown) { notFull.await(); } @@ -121,7 +122,7 @@ public boolean append(byte[] data) throws InterruptedException { return false; } } else { - waitForAdmissionLocked(data); + waitForSpaceAndAdmitLocked(data); if (shutdown) { throwOnMisuseLocked("LogBuffer closed while waiting for log admission"); } @@ -151,7 +152,7 @@ public Queue drain() throws InterruptedException { notEmpty.await(); } if (buffers.isEmpty()) { - assert shutdown; + Preconditions.checkState(shutdown); // We've been instructed to shut down and have already been drained from any buffers. 
return null; } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/README.md b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/README.md index 0f2eb667d5..0e42d516ce 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/README.md +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/README.md @@ -29,7 +29,6 @@ mutations. * failed mutation * operation specific context (i.e. condition of the conditional mutation or index in the bulk mutation or read-modify-write proto) - * checksum (to ensure integrity in case of a power outage) * We log JSON - one message per one line; that way all central logging solutions will be able to digest such data in a meaningful way * We write our own, custom logging mechanism diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Serializer.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Serializer.java index bd41727383..752c2bff54 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Serializer.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/Serializer.java @@ -32,4 +32,8 @@ public interface Serializer { * @return data representing the relevant log entry */ byte[] 
serialize(Mutation mutation, Throwable failureCause); + + interface Factory { + Serializer create() throws Throwable; + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/FlowControlStrategy.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/FlowControlStrategy.java index 1a930030b9..e6f547b6a1 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/FlowControlStrategy.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/FlowControlStrategy.java @@ -16,12 +16,15 @@ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol; import com.google.api.core.InternalApi; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOptions; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; import com.google.common.util.concurrent.ListenableFuture; /** * Interface used by {@link FlowController} to decide whether resources needed for performing a * secondary database request can be acquired. + * + *

Implementations of this class should be thread-safe. */ @InternalApi("For internal usage only") public interface FlowControlStrategy { @@ -41,4 +44,8 @@ ListenableFuture asyncRequestResourceReservation( /** Releases resources associated with provided description. */ void releaseResource(RequestResourcesDescription resource); + + interface Factory { + FlowControlStrategy create(MirroringOptions options) throws Throwable; + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/FlowController.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/FlowController.java index f9cdbd3bae..e7c2c6997f 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/FlowController.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/FlowController.java @@ -16,21 +16,47 @@ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol; import com.google.api.core.InternalApi; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.common.base.Function; +import com.google.common.base.Preconditions; +import com.google.common.base.Supplier; +import com.google.common.util.concurrent.FutureCallback; import com.google.common.util.concurrent.ListenableFuture; import com.google.common.util.concurrent.SettableFuture; import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; /** - * FlowController limits the number of concurrently performed requests to the secondary database. 
- * Call to {@link #asyncRequestResource(RequestResourcesDescription)} returns a future that will be - * completed when {@link FlowControlStrategy} decides that it can be allowed to perform the - * requests. The future might also be completed exceptionally if the resource was not allowed to - * obtain the resources. + * FlowController limits resources (RAM, number of requests) used by asynchronous requests to + * the secondary database. It is used to keep track of all requests sent to the secondary from {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.MirroringTable} and {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.MirroringBufferedMutator}, most + * often called via the helper method {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.utils.RequestScheduling#scheduleRequestWithCallback( + * RequestResourcesDescription, Supplier, FutureCallback, FlowController, MirroringTracer, + * Function)}. FlowController and {@link FlowControlStrategy} do not allocate any actual resources; + * they only account for the resources used by other classes, which is why we say that they + * "reserve" resources rather than allocate them. * - *

Order of allowing requests in determined by {@link FlowControlStrategy}. + *

Call to {@link #asyncRequestResource(RequestResourcesDescription)} returns a future that will + * be completed when {@link FlowControlStrategy} reserves the requested amount of resources and the + * requesting actor is allowed to perform its operation. {@link ResourceReservation}s obtained this way + * should be released using {@link ResourceReservation#release()} after the operation is completed. + * The future might also be completed exceptionally if the {@link FlowControlStrategy} rejects a + * request and resources for it won't be reserved. The future returned from {@link + * #asyncRequestResource(RequestResourcesDescription)} can be cancelled (using {@link + * #cancelRequest(Future)}) if the requesting actor is no longer willing to perform the request. + * Each request should be released or have its future cancelled. Futures can be safely cancelled + * even if they have already been completed - that simply releases the reserved resources. * - *
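The request/release/cancel contract described in this javadoc can be modeled with a small self-contained sketch. Everything here is hypothetical scaffolding (a `ReservationSketch` class built on `java.util.concurrent.CompletableFuture` rather than the patch's Guava `SettableFuture`-based classes), illustrating why a failed `cancel()` must be followed by releasing the reservation the future already holds:

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

/** Hypothetical model of the reservation-future contract described above. */
final class ReservationSketch {
    interface Reservation {
        void release();
    }

    /**
     * Mirrors the intent of FlowController#cancelRequest: if the future can no
     * longer be cancelled it must have completed, so release the reservation it
     * holds; a rejected (exceptionally completed) future holds no resources.
     */
    static void cancelOrRelease(CompletableFuture<Reservation> future) {
        if (!future.cancel(true)) {
            try {
                future.get().release();
            } catch (InterruptedException e) {
                // A completed future should never block in get().
                throw new IllegalStateException("completed future blocked in get()", e);
            } catch (ExecutionException e) {
                // The reservation was rejected; nothing was reserved.
            } catch (CancellationException e) {
                // Already cancelled elsewhere; nothing to release.
            }
        }
    }
}
```

Calling `cancelOrRelease` is therefore always safe: every path either cancels the pending request or frees the resources that were already reserved, matching the "each request should be released or have its future cancelled" rule above.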

Thread-safe. + *

Requests are completed in order determined by {@link FlowControlStrategy}. + * + *

{@link FlowController} and {@link AcquiredResourceReservation} provide a simpler interface + * over {@link FlowControlStrategy} - a {@link ResourceReservation#release()} called on a {@link + * ResourceReservation} obtained from an instance of FlowController will always release the appropriate + * amount of resources from the correct {@link FlowControlStrategy}. + * + *

Thread-safe because it uses the thread-safe interface of {@link FlowControlStrategy}. */ @InternalApi("For internal usage only") public class FlowController { @@ -49,14 +75,21 @@ public static void cancelRequest(Future resourceReservation // The cancellation may fail if the resources were already allocated by the FlowController, then // we should free them, or when the reservation was rejected, which we should ignore. if (!resourceReservationFuture.cancel(true)) { + // We cannot cancel the reservation future. This means that the future was already completed + // by calling `set()` or `setException()`. try { resourceReservationFuture.get().release(); } catch (InterruptedException ex) { - // If we couldn't cancel the request, it must have already been set, we assume - // that we will get the reservation without problems - assert false; + // This shouldn't happen. The future was already set with `set()` or `setException()`, which + // means that calling `.get()` on it shouldn't block. + throw new IllegalStateException( + "A reservation future couldn't be cancelled, but obtaining its result has thrown " + + "InterruptedException. This is unexpected.", + ex); } catch (ExecutionException ex) { - // The request was rejected. + // The request was rejected by the flow controller (e.g. cancelled). + // `AcquiredResourceReservation` handles such cases correctly and will release associated + // resources. } } } @@ -74,6 +107,8 @@ public interface ResourceReservation { * Default implementation of {@link ResourceReservation} that can be used by {@link * FlowControlStrategy} implementations as an entry to be notified when resources for request are * available. + + *

Not thread-safe. */ public static class AcquiredResourceReservation implements ResourceReservation { final RequestResourcesDescription requestResourcesDescription; @@ -92,11 +127,11 @@ public AcquiredResourceReservation( this.notified = false; } - void notifyWaiter() { - assert !this.notified; + public void notifyWaiter() { + Preconditions.checkState(!this.notified); this.notified = true; if (!this.notification.set(this)) { - assert this.notification.isCancelled(); + Preconditions.checkState(this.notification.isCancelled()); // The notification was cancelled, we should release its resources. this.release(); } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/RequestCountingFlowControlStrategy.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/RequestCountingFlowControlStrategy.java index 4861ca3e2d..05317e1bf9 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/RequestCountingFlowControlStrategy.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/RequestCountingFlowControlStrategy.java @@ -26,13 +26,13 @@ * counter by more than one, for example when calling {@link Table#get(List)}, the number of * elements in list is counted. * - *

If the number of scheduled entries reaches {@link Ledger#minDifferenceToBlock} then {@link - * #tryAcquireResource(RequestResourcesDescription)} will return false and reservation request from - * {@link FlowController#asyncRequestResource(RequestResourcesDescription)} won't be resolved - * immediately. + *

If the number of scheduled entries reaches {@link Ledger#outstandingRequestsThreshold} then + * {@link #tryAcquireResource(RequestResourcesDescription)} will return false and a reservation + * request from {@link FlowController#asyncRequestResource(RequestResourcesDescription)} won't be + * resolved immediately. - *

Requests that want to acquire more tickets than {@link Ledger#minDifferenceToBlock} are - * allowed to perform their actions only if all other resources were released. Along with + *

Requests that want to acquire more tickets than {@link Ledger#outstandingRequestsThreshold} + * are allowed to perform their actions only if all other resources have been released. Along with + * FlowController's guarantee of waking requests in order of arrival, this guarantees that an + * over-sized request will be the only running request, without any other running concurrently. It + * also means that: @@ -46,43 +46,64 @@ * *

For those reasons such requests can greatly reduce concurrency and the limit should be chosen * with care. - * - *
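The admission rules described above — admit an over-sized request only when the ledger is idle, otherwise enforce both the outstanding-request and used-bytes thresholds — can be sketched as a minimal, hypothetical ledger. `LedgerSketch` and its field names are illustrative stand-ins, not the patch's `Ledger`:

```java
/** Minimal, hypothetical model of the ledger admission rules described above. */
final class LedgerSketch {
    private final int outstandingRequestsThreshold;
    private final int usedBytesThreshold;
    private int outstanding; // models the in-flight entry count
    private int usedBytes;

    LedgerSketch(int outstandingRequestsThreshold, int usedBytesThreshold) {
        this.outstandingRequestsThreshold = outstandingRequestsThreshold;
        this.usedBytesThreshold = usedBytesThreshold;
    }

    /** Admit if the ledger is idle (so over-sized requests run alone) or both limits hold. */
    synchronized boolean tryAcquire(int entries, int bytes) {
        boolean admit = outstanding == 0
            || (outstanding + entries <= outstandingRequestsThreshold
                && usedBytes + bytes <= usedBytesThreshold);
        if (admit) {
            outstanding += entries;
            usedBytes += bytes;
        }
        return admit;
    }

    synchronized void release(int entries, int bytes) {
        outstanding -= entries;
        usedBytes -= bytes;
    }
}
```

Note how the `outstanding == 0` branch is what lets a request larger than either threshold through at all — and why, once admitted, it blocks everything behind it until released, which is the concurrency cost the paragraph above warns about.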

Not thread-safe. */ @InternalApi("For internal usage only") public class RequestCountingFlowControlStrategy extends SingleQueueFlowControlStrategy { - public RequestCountingFlowControlStrategy(int minDifferenceToBlock) { - super(new Ledger(minDifferenceToBlock)); + public RequestCountingFlowControlStrategy( + int outstandingRequestsThreshold, int usedBytesThreshold) { + super(new Ledger(outstandingRequestsThreshold, usedBytesThreshold)); } public RequestCountingFlowControlStrategy(MirroringOptions options) { - this(options.flowControllerMaxOutstandingRequests); + this(options.flowControllerMaxOutstandingRequests, options.flowControllerMaxUsedBytes); } + /** Not thread-safe, access is synchronized by {@link SingleQueueFlowControlStrategy}. */ private static class Ledger implements SingleQueueFlowControlStrategy.Ledger { - private int minDifferenceToBlock; + private final int usedBytesThreshold; + private final int outstandingRequestsThreshold; + private int primaryReadsAdvantage; // = completedPrimaryReads - completedSecondaryReads + private int usedBytes; - private Ledger(int minDifferenceToBlock) { - this.minDifferenceToBlock = minDifferenceToBlock; + private Ledger(int outstandingRequestsThreshold, int usedBytesThreshold) { + this.outstandingRequestsThreshold = outstandingRequestsThreshold; + this.usedBytesThreshold = usedBytesThreshold; this.primaryReadsAdvantage = 0; + this.usedBytes = 0; } - @Override public boolean canAcquireResource(RequestResourcesDescription requestResourcesDescription) { int neededEntries = requestResourcesDescription.numberOfResults; - return this.primaryReadsAdvantage == 0 - || this.primaryReadsAdvantage + neededEntries <= this.minDifferenceToBlock; + if (this.primaryReadsAdvantage == 0) { + // Always allow at least one request into the flow controller, regardless of its size. 
+ return true; + } + return this.primaryReadsAdvantage + neededEntries <= this.outstandingRequestsThreshold + && this.usedBytes + requestResourcesDescription.sizeInBytes <= this.usedBytesThreshold; } @Override - public void accountAcquiredResource(RequestResourcesDescription requestResourcesDescription) { - this.primaryReadsAdvantage += requestResourcesDescription.numberOfResults; + public boolean tryAcquireResource(RequestResourcesDescription requestResourcesDescription) { + if (this.canAcquireResource(requestResourcesDescription)) { + this.primaryReadsAdvantage += requestResourcesDescription.numberOfResults; + this.usedBytes += requestResourcesDescription.sizeInBytes; + return true; + } + return false; } @Override public void accountReleasedResources(RequestResourcesDescription requestResourcesDescription) { this.primaryReadsAdvantage -= requestResourcesDescription.numberOfResults; + this.usedBytes -= requestResourcesDescription.sizeInBytes; + } + } + + public static class Factory implements FlowControlStrategy.Factory { + @Override + public FlowControlStrategy create(MirroringOptions options) { + return new RequestCountingFlowControlStrategy(options); } } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/RequestResourcesDescription.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/RequestResourcesDescription.java index a38fde59aa..37baceba30 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/RequestResourcesDescription.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/RequestResourcesDescription.java 
@@ -202,7 +202,8 @@ private static long calculateSize(List operation) { } else if (row instanceof Get) { totalSize += calculateSize((Get) row); } else { - assert false; + throw new IllegalArgumentException( + String.format("calculateSize expected Mutation, RowMutations or Get, not %s.", row)); } } return totalSize; diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/SingleQueueFlowControlStrategy.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/SingleQueueFlowControlStrategy.java index 6a8e999dc4..48ea01a34a 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/SingleQueueFlowControlStrategy.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/SingleQueueFlowControlStrategy.java @@ -18,6 +18,7 @@ import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.AcquiredResourceReservation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; +import com.google.common.annotations.VisibleForTesting; import com.google.common.util.concurrent.ListenableFuture; import java.util.ArrayDeque; import java.util.ArrayList; @@ -27,12 +28,15 @@ /** * A {@link FlowControlStrategy} that keeps a queue of requests and admits then in order of * appearance. + * + *

Thread-safe. */ @InternalApi("For internal usage only") public class SingleQueueFlowControlStrategy implements FlowControlStrategy { // Used to prevent starving big requests by a lot of smaller ones. private final Queue waitingRequestsQueue = new ArrayDeque<>(); - // Counts in-flight requests and decides if new requests can be allowed. + // Counts resources used by in-flight requests and decides if resources for new requests can be + // reserved. Assumed to be non-thread-safe and accessed with synchronized(this.ledger). private final Ledger ledger; protected SingleQueueFlowControlStrategy(Ledger ledger) { @@ -47,12 +51,13 @@ public ListenableFuture asyncRequestResourceReservation( // We shouldn't complete futures with the lock held, so we use this list to gather those which // should be completed once we release the lock - List resourcesToBeNotified; - synchronized (this) { + List reservationsWithAllocatedResources; + synchronized (this.ledger) { this.waitingRequestsQueue.add(resources); - resourcesToBeNotified = this.allowWaiters(); + // Try to allocate resources for the new reservation. + reservationsWithAllocatedResources = this.tryToAllocateResourcesForNextReservations(); } - notifyWaiters(resourcesToBeNotified); + notifyReservations(reservationsWithAllocatedResources); return resources.notification; } @@ -60,45 +65,46 @@ public ListenableFuture asyncRequestResourceReservation( @Override public final void releaseResource(RequestResourcesDescription resource) { // We shouldn't complete futures with the lock held, so we use this list to gather those which - // should be completed once we release the lock - List resourcesToBeNotified; - synchronized (this) { + // should be completed once we release the lock. + List reservationsWithAllocatedResources; + synchronized (this.ledger) { this.ledger.accountReleasedResources(resource); - resourcesToBeNotified = this.allowWaiters(); + // After the resource was released we should try to allocate resources for more reservations. + reservationsWithAllocatedResources = this.tryToAllocateResourcesForNextReservations(); } - notifyWaiters(resourcesToBeNotified); + notifyReservations(reservationsWithAllocatedResources); } - private synchronized List allowWaiters() { - List resourcesToBeNotified = new ArrayList<>(); + private List tryToAllocateResourcesForNextReservations() { + synchronized (this.ledger) { + List resourcesToBeNotified = new ArrayList<>(); - while (!this.waitingRequestsQueue.isEmpty() - && this.tryAcquireResource(this.waitingRequestsQueue.peek().requestResourcesDescription)) { - AcquiredResourceReservation reservation = this.waitingRequestsQueue.remove(); - resourcesToBeNotified.add(reservation); - } + while (!this.waitingRequestsQueue.isEmpty() + && this.tryAcquireResource( + this.waitingRequestsQueue.peek().requestResourcesDescription)) { + AcquiredResourceReservation reservation = this.waitingRequestsQueue.remove(); + resourcesToBeNotified.add(reservation); + } - return resourcesToBeNotified; + return resourcesToBeNotified; + } } - private static void notifyWaiters(List resourcesToBeNotified) { + private static void notifyReservations(List resourcesToBeNotified) { for (AcquiredResourceReservation reservation : resourcesToBeNotified) { reservation.notifyWaiter(); } } - public boolean tryAcquireResource(RequestResourcesDescription requestResourcesDescription) { - boolean canAcquire = this.ledger.canAcquireResource(requestResourcesDescription); - if (canAcquire) { - this.ledger.accountAcquiredResource(requestResourcesDescription); + @VisibleForTesting boolean tryAcquireResource(RequestResourcesDescription requestResourcesDescription) { + synchronized (this.ledger) { + return this.ledger.tryAcquireResource(requestResourcesDescription); } - return canAcquire; } interface Ledger { - boolean canAcquireResource(RequestResourcesDescription requestResourcesDescription); - - void accountAcquiredResource(RequestResourcesDescription requestResourcesDescription); + boolean
tryAcquireResource(RequestResourcesDescription requestResourcesDescription); void accountReleasedResources(RequestResourcesDescription resource); } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/WriteOperationInfo.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/WriteOperationInfo.java new file mode 100644 index 0000000000..303f81a41c --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/WriteOperationInfo.java @@ -0,0 +1,51 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol; + +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import java.util.Collections; +import java.util.List; +import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; + +public class WriteOperationInfo { + public final RequestResourcesDescription requestResourcesDescription; + public final List operations; + public final HBaseOperation hBaseOperation; + + public WriteOperationInfo(Put operation) { + this(new RequestResourcesDescription(operation), operation, HBaseOperation.PUT); + } + + public WriteOperationInfo(Delete operation) { + this(new RequestResourcesDescription(operation), operation, HBaseOperation.DELETE); + } + + public WriteOperationInfo(RowMutations operation) { + this(new RequestResourcesDescription(operation), operation, HBaseOperation.MUTATE_ROW); + } + + private WriteOperationInfo( + RequestResourcesDescription requestResourcesDescription, + Row operation, + HBaseOperation hBaseOperation) { + this.requestResourcesDescription = requestResourcesDescription; + this.operations = Collections.singletonList(operation); + this.hBaseOperation = hBaseOperation; + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringMetricsRecorder.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringMetricsRecorder.java index 9f70d59743..4e59116474 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringMetricsRecorder.java +++ 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringMetricsRecorder.java @@ -16,8 +16,9 @@ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.OPERATION_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.READ_MATCHES; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.READ_MISMATCHES; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.WRITE_MISMATCHES; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_WRITE_ERRORS; import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; @@ -28,6 +29,13 @@ import io.opencensus.tags.TagContextBuilder; import io.opencensus.tags.Tagger; +/** + * Used to record metrics related to operations (by {@link MirroringSpanFactory}) and to record read + * mismatches and secondary write errors (in these cases accessed from {@link + * com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection}'s {@link MirroringTracer}). + * + *

Created by {@link MirroringTracer}. + */ @InternalApi("For internal usage only") public class MirroringMetricsRecorder { private final Tagger tagger; @@ -48,8 +56,8 @@ public void recordOperation( MeasureMap map = statsRecorder.newMeasureMap(); map.put(latencyMeasure, latencyMs); - if (failed) { - map.put(errorMeasure, 1); + if (errorMeasure != null) { + map.put(errorMeasure, failed ? 1 : 0); } map.record(tagContext); } @@ -72,10 +80,24 @@ public void recordReadMismatches(HBaseOperation operation, int numberOfMismatche map.record(tagContext); } - public void recordWriteMismatches(HBaseOperation operation, int numberOfMismatches) { + public void recordSecondaryWriteErrors(HBaseOperation operation, int numberOfErrors) { + TagContext tagContext = getTagContext(operation); + MeasureMap map = statsRecorder.newMeasureMap(); + map.put(SECONDARY_WRITE_ERRORS, numberOfErrors); + map.record(tagContext); + } + + public void recordReadMatches(HBaseOperation operation, int numberOfMatches) { TagContext tagContext = getTagContext(operation); MeasureMap map = statsRecorder.newMeasureMap(); - map.put(WRITE_MISMATCHES, numberOfMismatches); + map.put(READ_MATCHES, numberOfMatches); + map.record(tagContext); + } + + public void recordLatency(MeasureLong latencyMeasure, long latencyMs) { + TagContext tagContext = tagger.emptyBuilder().build(); + MeasureMap map = statsRecorder.newMeasureMap(); + map.put(latencyMeasure, latencyMs); map.record(tagContext); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringMetricsViews.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringMetricsViews.java index 87b06d15e2..80903f5514 100644 --- 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringMetricsViews.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringMetricsViews.java @@ -15,14 +15,17 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.FLOW_CONTROL_LATENCY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.MIRRORING_LATENCY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.OPERATION_KEY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.PRIMARY_ERRORS; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.PRIMARY_LATENCY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.READ_MATCHES; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.READ_MISMATCHES; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_ERRORS; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_LATENCY; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.WRITE_MISMATCHES; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_WRITE_ERRORS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_WRITE_ERROR_HANDLER_LATENCY; import 
com.google.api.core.InternalApi; import com.google.common.collect.ImmutableList; @@ -35,6 +38,8 @@ import io.opencensus.stats.Stats; import io.opencensus.stats.View; import io.opencensus.stats.ViewManager; +import io.opencensus.tags.TagKey; +import java.util.ArrayList; @InternalApi("For internal usage only") public class MirroringMetricsViews { @@ -107,15 +112,46 @@ public class MirroringMetricsViews { SUM, ImmutableList.of(OPERATION_KEY)); - /** {@link View} for Mirroring client's secondary write mismatches. */ - private static final View WRITE_MISMATCH_VIEW = + /** {@link View} for Mirroring client's secondary database write errors. */ + private static final View SECONDARY_WRITE_ERROR_VIEW = View.create( - View.Name.create("cloud.google.com/java/mirroring/write_mismatch"), - "Detected write mismatches count.", - WRITE_MISMATCHES, + View.Name.create("cloud.google.com/java/mirroring/secondary_write_error"), + "Secondary database write error count.", + SECONDARY_WRITE_ERRORS, SUM, ImmutableList.of(OPERATION_KEY)); + /** {@link View} for Mirroring client's secondary read matches. */ + private static final View READ_MATCH_VIEW = + View.create( + View.Name.create("cloud.google.com/java/mirroring/read_match"), + "Detected read matches count.", + READ_MATCHES, + SUM, + ImmutableList.of(OPERATION_KEY)); + + /** {@link View} for Mirroring client's flow control resource acquisition latency. */ + private static final View FLOW_CONTROL_LATENCY_VIEW = + View.create( + View.Name.create("cloud.google.com/java/mirroring/flow_control_latency"), + "Distribution of latency of acquiring flow controller resources.", + FLOW_CONTROL_LATENCY, + AGGREGATION_WITH_MILLIS_HISTOGRAM, + new ArrayList()); + + /** {@link View} for Mirroring client's secondary write error handling latency.
*/ + private static final View SECONDARY_WRITE_ERROR_HANDLER_LATENCY_VIEW = + View.create( + View.Name.create("cloud.google.com/java/mirroring/secondary_write_error_handler_latency"), + "Distribution of secondary write error handling latency.", + SECONDARY_WRITE_ERROR_HANDLER_LATENCY, + AGGREGATION_WITH_MILLIS_HISTOGRAM, + new ArrayList()); + + // TODO: Add a new view "Mirroring operation failed" that tells you if a high-level mirroring + // operation failed. It could fail due to a primary or secondary failure (for example, concurrent + // writes). + private static final ImmutableSet MIRRORING_CLIENT_VIEWS_SET = ImmutableSet.of( @@ -123,8 +159,11 @@ public class MirroringMetricsViews { SECONDARY_OPERATION_LATENCY_VIEW, SECONDARY_OPERATION_ERROR_VIEW, MIRRORING_OPERATION_LATENCY_VIEW, + READ_MATCH_VIEW, READ_MISMATCH_VIEW, - WRITE_MISMATCH_VIEW); + SECONDARY_WRITE_ERROR_VIEW, + FLOW_CONTROL_LATENCY_VIEW, + SECONDARY_WRITE_ERROR_HANDLER_LATENCY_VIEW); /** Registers all Mirroring client views to OpenCensus View.
*/ public static void registerMirroringClientViews() { diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringSpanConstants.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringSpanConstants.java index 575007fdf0..d28232f671 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringSpanConstants.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringSpanConstants.java @@ -52,15 +52,35 @@ public class MirroringSpanConstants { "Count of errors on secondary database.", "1"); + public static final MeasureLong READ_MATCHES = + MeasureLong.create( + "com/google/cloud/bigtable/mirroring/read_verification/matches", + "Count of successfully verified reads.", + "1"); + public static final MeasureLong READ_MISMATCHES = MeasureLong.create( - "com/google/cloud/bigtable/mirroring/mismatch/read", + "com/google/cloud/bigtable/mirroring/read_verification/mismatches", "Count of read mismatches detected.", "1"); - public static final MeasureLong WRITE_MISMATCHES = + public static final MeasureLong SECONDARY_WRITE_ERRORS = + MeasureLong.create( + "com/google/cloud/bigtable/mirroring/secondary/write_error_rate", + "Count of write errors on secondary database.", + "1"); + + public static final MeasureLong FLOW_CONTROL_LATENCY = MeasureLong.create( - "com/google/cloud/bigtable/mirroring/mismatch/write", "Count of write mismatches.", "1"); + "com/google/cloud/bigtable/mirroring/flow_control_latency", + "Distribution of latency of acquiring flow controller resources.", + "ms"); + + public static 
final MeasureLong SECONDARY_WRITE_ERROR_HANDLER_LATENCY = + MeasureLong.create( + "com/google/cloud/bigtable/mirroring/secondary_write_error_handler_latency", + "Distribution of secondary write error handling latency.", + "ms"); public static TagKey OPERATION_KEY = TagKey.create("operation"); @@ -86,7 +106,6 @@ public enum HBaseOperation { BATCH_CALLBACK("batchCallback"), TABLE_CLOSE("close"), GET_TABLE("getTable"), - GET_BUFFERED_MUTATOR("getBufferedMutator"), BUFFERED_MUTATOR_FLUSH("flush"), BUFFERED_MUTATOR_MUTATE("mutate"), BUFFERED_MUTATOR_MUTATE_LIST("mutateList"), diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringSpanFactory.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringSpanFactory.java index 886da331f4..9801ac437a 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringSpanFactory.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringSpanFactory.java @@ -15,11 +15,13 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.FLOW_CONTROL_LATENCY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.MIRRORING_LATENCY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.PRIMARY_ERRORS; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.PRIMARY_LATENCY; import static 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_ERRORS; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_LATENCY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_WRITE_ERROR_HANDLER_LATENCY; import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.WriteOperationFutureCallback; @@ -40,6 +42,12 @@ import java.util.concurrent.TimeUnit; import org.checkerframework.checker.nullness.compatqual.NullableDecl; +/** + * Used to create named spans for tracing (using {@link #tracer}) and recording metrics related to + * those spans (using {@link #mirroringMetricsRecorder}). + * + *

Created by {@link MirroringTracer}. + */ @InternalApi("For internal usage only") public class MirroringSpanFactory { private final Tracer tracer; @@ -110,37 +118,35 @@ private Span asyncCloseSpan() { } public T wrapPrimaryOperation( - CallableThrowingIOException operationRunner, HBaseOperation operationName) - throws IOException { + CallableThrowingIOException operations, HBaseOperation operationName) throws IOException { try { - return wrapPrimaryOperationAndMeasure(operationRunner, operationName); + return wrapPrimaryOperationAndMeasure(operations, operationName); } catch (InterruptedException e) { - assert false; - throw new IllegalStateException(); + throw new IllegalStateException( + "CallableThrowingIOException shouldn't throw InterruptedException."); } } public void wrapPrimaryOperation( - CallableThrowingIOAndInterruptedException operationRunner, HBaseOperation operationName) + CallableThrowingIOAndInterruptedException operations, HBaseOperation operationName) throws IOException, InterruptedException { - wrapPrimaryOperationAndMeasure(operationRunner, operationName); + wrapPrimaryOperationAndMeasure(operations, operationName); } public T wrapSecondaryOperation( - CallableThrowingIOException operationRunner, HBaseOperation operationName) - throws IOException { + CallableThrowingIOException operations, HBaseOperation operationName) throws IOException { try { - return wrapSecondaryOperationAndMeasure(operationRunner, operationName); + return wrapSecondaryOperationAndMeasure(operations, operationName); } catch (InterruptedException e) { - assert false; - throw new IllegalStateException(); + throw new IllegalStateException( + "CallableThrowingIOException shouldn't throw InterruptedException."); } } public T wrapSecondaryOperation( - CallableThrowingIOAndInterruptedException operationRunner, HBaseOperation operationName) + CallableThrowingIOAndInterruptedException operations, HBaseOperation operationName) throws IOException, InterruptedException { - return 
wrapSecondaryOperationAndMeasure(operationRunner, operationName); + return wrapSecondaryOperationAndMeasure(operations, operationName); } public FutureCallback wrapReadVerificationCallback(final FutureCallback callback) { @@ -161,9 +167,18 @@ public void onFailure(Throwable throwable) { }; } - public WriteOperationFutureCallback wrapWriteOperationCallback( + public FutureCallback wrapWriteOperationCallback( + final HBaseOperation operation, + final MirroringTracer mirroringTracer, final WriteOperationFutureCallback callback) { - return new WriteOperationFutureCallback() { + // WriteOperationFutureCallback already defines always empty `onSuccess` method, no need to wrap + // it. + return new FutureCallback() { + @Override + public void onSuccess(@NullableDecl T t) { + mirroringTracer.metricsRecorder.recordSecondaryWriteErrors(operation, 0); + } + @Override public void onFailure(Throwable throwable) { try (Scope scope = MirroringSpanFactory.this.writeErrorScope()) { @@ -174,7 +189,7 @@ public void onFailure(Throwable throwable) { } public Scope flowControlScope() { - return flowControlSpanBuilder().startScopedSpan(); + return new StopwatchScope(flowControlSpanBuilder().startScopedSpan(), FLOW_CONTROL_LATENCY); } public Scope verificationScope() { @@ -182,7 +197,8 @@ public Scope verificationScope() { } public Scope writeErrorScope() { - return tracer.spanBuilder("writeErrors").startScopedSpan(); + return new StopwatchScope( + tracer.spanBuilder("writeErrors").startScopedSpan(), SECONDARY_WRITE_ERROR_HANDLER_LATENCY); } public Scope operationScope(HBaseOperation name) { @@ -202,21 +218,17 @@ public Scope spanAsScope(Span span) { } private T wrapPrimaryOperationAndMeasure( - CallableThrowingIOAndInterruptedException operationRunner, HBaseOperation operationName) + CallableThrowingIOAndInterruptedException operations, HBaseOperation operationName) throws IOException, InterruptedException { return wrapOperationAndMeasure( - operationRunner, - PRIMARY_LATENCY, - 
PRIMARY_ERRORS, - this.primaryOperationScope(), - operationName); + operations, PRIMARY_LATENCY, PRIMARY_ERRORS, this.primaryOperationScope(), operationName); } private T wrapSecondaryOperationAndMeasure( - CallableThrowingIOAndInterruptedException operationRunner, HBaseOperation operationName) + CallableThrowingIOAndInterruptedException operations, HBaseOperation operationName) throws IOException, InterruptedException { return wrapOperationAndMeasure( - operationRunner, + operations, SECONDARY_LATENCY, SECONDARY_ERRORS, this.secondaryOperationsScope(), @@ -224,7 +236,7 @@ private T wrapSecondaryOperationAndMeasure( } private T wrapOperationAndMeasure( - CallableThrowingIOAndInterruptedException operationRunner, + CallableThrowingIOAndInterruptedException operations, MeasureLong latencyMeasure, MeasureLong errorMeasure, Scope scope, @@ -235,8 +247,8 @@ private T wrapOperationAndMeasure( Stopwatch stopwatch = Stopwatch.createUnstarted(); try (Scope scope1 = scope) { stopwatch.start(); - return operationRunner.call(); - } catch (IOException | InterruptedException e) { + return operations.call(); + } catch (IOException | InterruptedException | RuntimeException e) { operationFailed = true; throw e; } finally { @@ -282,4 +294,24 @@ public void close() { this.scope.close(); } } + + private class StopwatchScope implements Scope { + private final Stopwatch stopwatch; + private final Scope scope; + private final MeasureLong measure; + + public StopwatchScope(Scope scope, MeasureLong measure) { + this.scope = scope; + this.stopwatch = Stopwatch.createStarted(); + this.measure = measure; + } + + @Override + public void close() { + this.stopwatch.stop(); + this.scope.close(); + MirroringSpanFactory.this.mirroringMetricsRecorder.recordLatency( + this.measure, this.stopwatch.elapsed(TimeUnit.MILLISECONDS)); + } + } } diff --git 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringTracer.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringTracer.java index 34648493dd..633f7626fe 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringTracer.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/mirroringmetrics/MirroringTracer.java @@ -20,6 +20,13 @@ import io.opencensus.tags.Tags; import io.opencensus.trace.Tracing; +/** + * Sets up {@link MirroringSpanFactory} and {@link MirroringMetricsRecorder} using default {@link + * io.opencensus.tags.Tagger}, {@link io.opencensus.stats.StatsRecorder} and {@link + * io.opencensus.trace.Tracer}. + * + *

Used as a provider for {@link MirroringSpanFactory} and {@link MirroringMetricsRecorder}. + */ @InternalApi("For internal usage only") public class MirroringTracer { public final MirroringSpanFactory spanFactory; diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/HierarchicalReferenceCounter.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/HierarchicalReferenceCounter.java new file mode 100644 index 0000000000..6060307b4a --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/HierarchicalReferenceCounter.java @@ -0,0 +1,76 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting; + +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.ResultScanner; +import org.apache.hadoop.hbase.client.Table; + +/** + * MirroringClient uses asynchronous tasks to perform operations on the secondary database. 
+ * Operations on the secondary database can take longer than operations on the primary and thus we might + * have a long tail of scheduled or in-flight operations (the length of the tail is limited by the + * {@link com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController}). For this + * reason we have to count scheduled requests and prevent closing the resources they use (Tables, + * ResultScanners, Connections, etc.) before all requests using that resource are completed. To + * achieve this we use manual reference counting - each scheduled asynchronous operation + * increments the reference counters of all resources it uses (e.g. a scheduled {@link Table#get(Get)} + * increments the reference counters of the Table and Connection, {@link ResultScanner#next()} increments + * the ResultScanner's, Table's and Connection's reference counters). + * + *

There are other, less complicated approaches possible (for example we could keep only a single + * reference counter for the Connection object, or use a slightly simpler hierarchical approach in which every + * level of the hierarchy (Connection, Table, ResultScanner) counts its own requests and increments + * (decrements) the reference counter of its parent level only when it is created (closed)), but they + * wouldn't let us achieve the two goals of our design: + * + *

    + *
  • If the user calls close() on a Mirroring* object, we should also call close() on the underlying + * secondary object, but only after all requests using that object have finished.
  • If the user doesn't call close() on a Mirroring* object (because some users might not close + * their Tables), we shouldn't close the underlying secondary object either, but this shouldn't prevent + * closing its parent Connection object if the user closes it.
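The two goals above can be illustrated with a small, plain-JDK sketch of hierarchical reference counting. The class and field names here (`RefCounter`, `HierarchicalRefCounter`, `allReleased`) are hypothetical simplifications, not the real `ListenableReferenceCounter`/`HierarchicalReferenceCounter` API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical, simplified model of the counters described above.
final class RefCounter {
    // Start at 1: the "owner" holds one reference until close() releases it.
    private final AtomicInteger count = new AtomicInteger(1);
    final CompletableFuture<Void> allReleased = new CompletableFuture<>();

    void increment() { count.incrementAndGet(); }

    void decrement() {
        if (count.decrementAndGet() == 0) {
            allReleased.complete(null);
        }
    }
}

final class HierarchicalRefCounter {
    final RefCounter current = new RefCounter();
    final RefCounter parent;

    HierarchicalRefCounter(RefCounter parent) { this.parent = parent; }

    // Every in-flight request pins both its own level and the level above it.
    void increment() { current.increment(); parent.increment(); }
    void decrement() { parent.decrement(); current.decrement(); }
}

public class RefCountingDemo {
    public static void main(String[] args) {
        RefCounter connection = new RefCounter();
        HierarchicalRefCounter table = new HierarchicalRefCounter(connection);

        table.increment();                       // an async request starts on the table
        connection.decrement();                  // user closes the connection...
        assert !connection.allReleased.isDone(); // ...but it waits for the request
        table.decrement();                       // the request finishes
        assert connection.allReleased.isDone();  // connection may now close,
                                                 // even though the table was never closed
    }
}
```

Note how the never-closed table does not block the connection's shutdown - the connection only waits for in-flight requests, which is exactly the second design goal.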
+ * + * The second point is critical for us because {@link MirroringConnection#close()} call blocks until + * all requests using that Connection are finished. Counting created Tables (which would happen in + * the second example of simpler reference counting scheme described above) would cause deadlocks. + */ +public class HierarchicalReferenceCounter implements ReferenceCounter { + + /** Counter of asynchronous requests from current hierarchy level. */ + public final ListenableReferenceCounter current; + /** Counter of asynchronous requests from hierarchy above the current level. */ + public final ReferenceCounter parent; + + public HierarchicalReferenceCounter(ReferenceCounter parentReferenceCounter) { + this.current = new ListenableReferenceCounter(); + this.parent = parentReferenceCounter; + } + + @Override + public void incrementReferenceCount() { + this.current.incrementReferenceCount(); + this.parent.incrementReferenceCount(); + } + + @Override + public void decrementReferenceCount() { + this.parent.decrementReferenceCount(); + this.current.decrementReferenceCount(); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/ListenableReferenceCounter.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ListenableReferenceCounter.java similarity index 65% rename from bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/ListenableReferenceCounter.java rename to bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ListenableReferenceCounter.java index 189d35e12e..38568fc8fa 100644 --- 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/ListenableReferenceCounter.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ListenableReferenceCounter.java @@ -13,11 +13,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -package com.google.cloud.bigtable.mirroring.hbase1_x.utils; +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting; import com.google.api.core.InternalApi; import com.google.common.util.concurrent.ListenableFuture; -import com.google.common.util.concurrent.MoreExecutors; import com.google.common.util.concurrent.SettableFuture; import java.util.concurrent.Executor; import java.util.concurrent.atomic.AtomicInteger; @@ -32,7 +31,8 @@ * closing these resources while they have some scheduled or ongoing asynchronous operations. */ @InternalApi -public class ListenableReferenceCounter { +public class ListenableReferenceCounter implements ReferenceCounter { + private AtomicInteger referenceCount; private SettableFuture onLastReferenceClosed; @@ -54,29 +54,4 @@ public void decrementReferenceCount() { public ListenableFuture getOnLastReferenceClosed() { return this.onLastReferenceClosed; } - - /** Increments the reference counter and decrements it after the future is resolved. */ - public void holdReferenceUntilCompletion(ListenableFuture future) { - this.incrementReferenceCount(); - future.addListener( - new Runnable() { - @Override - public void run() { - ListenableReferenceCounter.this.decrementReferenceCount(); - } - }, - MoreExecutors.directExecutor()); - } - - /** Increments the reference counter and decrements it after the provided object is closed. 
*/ - public void holdReferenceUntilClosing(ListenableCloseable listenableCloseable) { - this.incrementReferenceCount(); - listenableCloseable.addOnCloseListener( - new Runnable() { - @Override - public void run() { - ListenableReferenceCounter.this.decrementReferenceCount(); - } - }); - } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/ListenableCloseable.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ReferenceCounter.java similarity index 54% rename from bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/ListenableCloseable.java rename to bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ReferenceCounter.java index 55ef5819c3..cad939c391 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/ListenableCloseable.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ReferenceCounter.java @@ -13,14 +13,14 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -package com.google.cloud.bigtable.mirroring.hbase1_x.utils; +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting; /** - * Objects that can run registered listeners after they are closed. 
Facilitates reference counting - * using {@link ListenableReferenceCounter}, objects of classes implementing this interface can be - * used in {@link ListenableReferenceCounter#holdReferenceUntilClosing(ListenableCloseable)}, the - * reference is decreased after the referenced object is closed. + * Common interface for {@link HierarchicalReferenceCounter} and {@link ListenableReferenceCounter}. + * Consult their documentation for description of use cases. */ -public interface ListenableCloseable { - void addOnCloseListener(Runnable listener); +public interface ReferenceCounter { + void incrementReferenceCount(); + + void decrementReferenceCount(); } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ReferenceCounterUtils.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ReferenceCounterUtils.java new file mode 100644 index 0000000000..c45078a340 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/ReferenceCounterUtils.java @@ -0,0 +1,35 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting; + +import com.google.common.util.concurrent.ListenableFuture; +import com.google.common.util.concurrent.MoreExecutors; + +public class ReferenceCounterUtils { + /** Increments the reference counter and decrements it after the future is resolved. */ + public static void holdReferenceUntilCompletion( + final ReferenceCounter referenceCounter, ListenableFuture future) { + referenceCounter.incrementReferenceCount(); + future.addListener( + new Runnable() { + @Override + public void run() { + referenceCounter.decrementReferenceCount(); + } + }, + MoreExecutors.directExecutor()); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/reflection/ReflectionConstructor.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/reflection/ReflectionConstructor.java deleted file mode 100644 index f6f7eda8bb..0000000000 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/reflection/ReflectionConstructor.java +++ /dev/null @@ -1,56 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package com.google.cloud.bigtable.mirroring.hbase1_x.utils.reflection; - -import java.lang.reflect.Constructor; -import java.lang.reflect.InvocationTargetException; -import java.util.ArrayList; -import java.util.List; - -public class ReflectionConstructor { - public static T construct(String className, Object... params) { - List> constructorArgs = new ArrayList<>(); - for (Object param : params) { - constructorArgs.add(param.getClass()); - } - Constructor constructor = - getConstructor(className, constructorArgs.toArray(new Class[0])); - try { - return constructor.newInstance(params); - } catch (InstantiationException | IllegalAccessException | InvocationTargetException e) { - throw new RuntimeException(e); - } - } - - public static T construct(String className, Class paramClass, Object param) { - Constructor constructor = getConstructor(className, paramClass); - try { - return constructor.newInstance(param); - } catch (InstantiationException | IllegalAccessException | InvocationTargetException e) { - throw new RuntimeException(e); - } - } - - private static Constructor getConstructor(String className, Class... 
parameterTypes) { - try { - @SuppressWarnings("unchecked") - Class c = (Class) Class.forName(className); - return c.getDeclaredConstructor(parameterTypes); - } catch (ClassNotFoundException | ClassCastException | NoSuchMethodException e) { - throw new RuntimeException(e); - } - } -} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/CopyingTimestamper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/CopyingTimestamper.java new file mode 100644 index 0000000000..89ff802f45 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/CopyingTimestamper.java @@ -0,0 +1,136 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper; + +import com.google.api.core.InternalApi; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.NavigableMap; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; + +@InternalApi("For internal use only") +public class CopyingTimestamper implements Timestamper { + private final MonotonicTimer timer = new MonotonicTimer(); + + @Override + public Put fillTimestamp(Put put) { + long timestamp = timer.getCurrentTimeMillis(); + return setPutTimestamp(put, timestamp); + } + + @Override + public RowMutations fillTimestamp(RowMutations rowMutations) { + long timestamp = timer.getCurrentTimeMillis(); + return setRowMutationsTimestamp(rowMutations, timestamp); + } + + @Override + public Mutation fillTimestamp(Mutation mutation) { + if (mutation instanceof Put) { + return fillTimestamp((Put) mutation); + } + return mutation; + } + + @Override + public <T extends Row> List<T> fillTimestamp(List<T> list) { + long timestamp = timer.getCurrentTimeMillis(); + List<T> result = new ArrayList<>(); + for (T row : list) { + result.add(setTimestamp(row, timestamp)); + } + return result; + } + + private <T extends Row> T setTimestamp(T row, long timestamp) { + // These casts are safe as long as there are no subclasses of Put and RowMutations, and we do + // not know of any. + if (row instanceof Put) { + return (T) setPutTimestamp((Put) row, timestamp); + } else if (row instanceof RowMutations) { + return (T) setRowMutationsTimestamp((RowMutations) row, timestamp); + } + // Bigtable doesn't support timestamps for Increment and Append, and supports them only for a + // specific subset of Deletes, so let's not modify them.
+ return row; + } + + private Put setPutTimestamp(Put put, long timestamp) { + Put putCopy = clonePut(put); + TimestampUtils.setPutTimestamp(putCopy, timestamp); + return putCopy; + } + + private Put clonePut(Put toClone) { + // This copy shares Cells with the original. + Put putCopy = new Put(toClone); + cloneFamilyCellMap(putCopy.getFamilyCellMap()); + return putCopy; + } + + private void cloneFamilyCellMap(NavigableMap<byte[], List<Cell>> familyCellMap) { + for (List<Cell> cells : familyCellMap.values()) { + cloneCellList(cells); + } + } + + private void cloneCellList(List<Cell> cells) { + for (int i = 0; i < cells.size(); i++) { + cells.set(i, cloneCell(cells.get(i))); + } + } + + private Cell cloneCell(Cell cell) { + if (!(cell instanceof KeyValue)) { + throw new RuntimeException( + "CopyingTimestamper doesn't support Puts with cells other than the default KeyValue cell."); + } + try { + return ((KeyValue) cell).clone(); + } catch (CloneNotSupportedException e) { + throw new RuntimeException( + "KeyValue implementation doesn't support the clone() method, so CopyingTimestamper cannot use it."); + } + } + + private RowMutations setRowMutationsTimestamp(RowMutations rowMutations, long timestamp) { + try { + RowMutations result = new RowMutations(rowMutations.getRow()); + for (Mutation mutation : rowMutations.getMutations()) { + if (mutation instanceof Put) { + result.add(setPutTimestamp((Put) mutation, timestamp)); + } else if (mutation instanceof Delete) { + result.add((Delete) mutation); + } else { + // Only `Delete`s and `Put`s are supported. + throw new UnsupportedOperationException(); + } + } + return result; + } catch (IOException e) { + // IOException is thrown when the row of an added mutation doesn't match the `RowMutations`' row. + // This shouldn't happen.
+ throw new RuntimeException(e); + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/InPlaceTimestamper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/InPlaceTimestamper.java new file mode 100644 index 0000000000..6374ed4c10 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/InPlaceTimestamper.java @@ -0,0 +1,77 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper; + +import com.google.api.core.InternalApi; +import java.util.List; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; + +@InternalApi("For internal use only") +public class InPlaceTimestamper implements Timestamper { + private final MonotonicTimer timer = new MonotonicTimer(); + + @Override + public Put fillTimestamp(Put put) { + long timestamp = timer.getCurrentTimeMillis(); + TimestampUtils.setPutTimestamp(put, timestamp); + return put; + } + + @Override + public RowMutations fillTimestamp(RowMutations rowMutations) { + long timestamp = timer.getCurrentTimeMillis(); + setRowMutationsTimestamp(rowMutations, timestamp); + return rowMutations; + } + + @Override + public Mutation fillTimestamp(Mutation mutation) { + if (mutation instanceof Put) { + return fillTimestamp((Put) mutation); + } + return mutation; + } + + @Override + public <T extends Row> List<T> fillTimestamp(List<T> list) { + long timestamp = timer.getCurrentTimeMillis(); + for (T r : list) { + setTimestamp(r, timestamp); + } + return list; + } + + private void setTimestamp(Row row, long timestamp) { + // These casts are safe as long as there are no subclasses of Put and RowMutations, and we do + // not know of any. + if (row instanceof Put) { + TimestampUtils.setPutTimestamp((Put) row, timestamp); + } else if (row instanceof RowMutations) { + setRowMutationsTimestamp((RowMutations) row, timestamp); + } + // Bigtable doesn't support timestamps for Increment and Append, and supports them only for a + // specific subset of Deletes, so let's not modify them.
+ } + + private void setRowMutationsTimestamp(RowMutations rowMutations, long timestamp) { + for (Mutation mutation : rowMutations.getMutations()) { + setTimestamp(mutation, timestamp); + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/MonotonicTimer.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/MonotonicTimer.java new file mode 100644 index 0000000000..211a0fd89c --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/MonotonicTimer.java @@ -0,0 +1,43 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper; + +import com.google.common.base.Stopwatch; +import java.util.concurrent.TimeUnit; + +/** + * {@code System#currentTimeMillis()} is not monotonic and using it as a source for {@link + * org.apache.hadoop.hbase.client.Mutation} timestamps can result in confusion and unexpected + * reordering of written versions. + * + *

This class provides a monotonically increasing value that is related to wall time. + * + *

Guava's {@link Stopwatch} is monotonic because it uses {@link System#nanoTime()} to measure + * passed time. + */ +public class MonotonicTimer { + private final long startingTimestampMillis; + private final Stopwatch stopwatch; + + public MonotonicTimer() { + this.startingTimestampMillis = System.currentTimeMillis(); + this.stopwatch = Stopwatch.createStarted(); + } + + public long getCurrentTimeMillis() { + return this.startingTimestampMillis + this.stopwatch.elapsed(TimeUnit.MILLISECONDS); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/NoopTimestamper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/NoopTimestamper.java new file mode 100644 index 0000000000..44839fbbee --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/NoopTimestamper.java @@ -0,0 +1,47 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
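As a standalone illustration of the technique described by the MonotonicTimer javadoc above, here is a plain-JDK sketch that anchors a wall-clock reading once and then advances it using `System.nanoTime()` (the real class delegates this to Guava's Stopwatch; `MonotonicTimerSketch` is a hypothetical name):

```java
import java.util.concurrent.TimeUnit;

// Plain-JDK sketch of the monotonic timer idea: one wall-clock anchor at
// construction, then only the monotonic nanoTime() source moves forward.
public class MonotonicTimerSketch {
    private final long startMillis = System.currentTimeMillis(); // wall-clock anchor
    private final long startNanos = System.nanoTime();           // monotonic source

    public long getCurrentTimeMillis() {
        long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
        return startMillis + elapsedMillis;
    }

    public static void main(String[] args) {
        MonotonicTimerSketch timer = new MonotonicTimerSketch();
        long a = timer.getCurrentTimeMillis();
        long b = timer.getCurrentTimeMillis();
        // Unlike raw currentTimeMillis(), successive readings never go backwards,
        // even if NTP steps the system clock between the two calls.
        assert b >= a;
    }
}
```

The returned value tracks wall time closely (it drifts only as much as nanoTime drifts from the system clock after construction), which is what Mutation timestamps need.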
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper; + +import com.google.api.core.InternalApi; +import java.util.List; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; + +@InternalApi("For internal use only") +public class NoopTimestamper implements Timestamper { + + @Override + public List<? extends Row> fillTimestamp(List<? extends Row> list) { + return list; + } + + @Override + public RowMutations fillTimestamp(RowMutations rowMutations) { + return rowMutations; + } + + @Override + public Put fillTimestamp(Put put) { + return put; + } + + @Override + public Mutation fillTimestamp(Mutation mutation) { + return mutation; + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TimestampUtils.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TimestampUtils.java new file mode 100644 index 0000000000..dde05ff30b --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TimestampUtils.java @@ -0,0 +1,48 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper; + +import java.io.IOException; +import java.util.List; +import java.util.Map; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.client.Put; + +public class TimestampUtils { + public static void setPutTimestamp(Put put, long timestamp) { + for (Map.Entry<byte[], List<Cell>> entry : put.getFamilyCellMap().entrySet()) { + for (Cell cell : entry.getValue()) { + try { + if (isTimestampNotSet(cell.getTimestamp())) { + CellUtil.setTimestamp(cell, timestamp); + } + } catch (IOException e) { + // IOException is thrown when `cell` does not implement `SettableTimestamp`; in that + // case we have no reliable way of setting the timestamp, so we surface the error to + // the caller. This shouldn't happen for vanilla `Put` instances. + throw new RuntimeException(e); + } + } + } + } + + private static boolean isTimestampNotSet(long timestamp) { + return timestamp == HConstants.LATEST_TIMESTAMP; + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/Timestamper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/Timestamper.java new file mode 100644 index 0000000000..5e926aa534 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/Timestamper.java @@ -0,0 +1,56 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper; + +import com.google.api.core.InternalApi; +import java.util.List; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; + +/** + * Timestamper implementations are responsible for adding (or not) timestamps to {@link Put}s before + * they are sent to underlying databases. + */ +@InternalApi("For internal use only") +public interface Timestamper { + + List<? extends Row> fillTimestamp(List<? extends Row> list); + + RowMutations fillTimestamp(RowMutations rowMutations); + + Put fillTimestamp(Put put); + + Mutation fillTimestamp(Mutation mutation); + + enum TimestampingMode { + disabled, + inplace, + copy, + } + + static Timestamper create(TimestampingMode mode) { + switch (mode) { + case inplace: + return new InPlaceTimestamper(); + case copy: + return new CopyingTimestamper(); + default: + return new NoopTimestamper(); + } + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/DefaultMismatchDetector.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/DefaultMismatchDetector.java index 270e143afc..e70f9045b8 100644 ---
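The timestamper classes above hinge on MonotonicTimer's trick: pin `System.currentTimeMillis()` once at construction, then advance it with a monotonic clock so generated timestamps never run backwards even when the system clock is adjusted. A self-contained sketch of that idea (hypothetical class name; `System.nanoTime()` standing in for Guava's `Stopwatch`):

```java
import java.util.concurrent.TimeUnit;

// Minimal sketch of the MonotonicTimer idea: anchor wall time once, then
// advance it with the monotonic System.nanoTime() clock.
public class MonotonicTimerSketch {
  private final long startingTimestampMillis = System.currentTimeMillis();
  private final long startNanos = System.nanoTime();

  public long getCurrentTimeMillis() {
    long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
    return startingTimestampMillis + elapsedMillis;
  }

  public static void main(String[] args) {
    MonotonicTimerSketch timer = new MonotonicTimerSketch();
    long t1 = timer.getCurrentTimeMillis();
    long t2 = timer.getCurrentTimeMillis();
    // Successive readings never decrease, unlike raw System.currentTimeMillis().
    System.out.println(t2 >= t1); // true
  }
}
```

The trade-off is that the returned value tracks wall time only as of construction; a long-lived timer will drift from the (possibly NTP-corrected) system clock.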
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/DefaultMismatchDetector.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/DefaultMismatchDetector.java @@ -15,122 +15,449 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.verification; +import static com.google.cloud.bigtable.mirroring.hbase1_x.verification.DefaultMismatchDetector.LazyBytesHexlifier.listOfHexRows; + import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Comparators; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringMetricsRecorder; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import java.nio.ByteBuffer; +import java.util.ArrayList; import java.util.Arrays; +import java.util.Deque; +import java.util.HashSet; +import java.util.Iterator; +import java.util.LinkedList; import java.util.List; +import java.util.Set; +import java.util.SortedSet; +import java.util.TreeSet; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.util.Bytes; @InternalApi("For internal usage only") public class DefaultMismatchDetector implements MismatchDetector { + private final int maxValueBytesLogged; + private static final Logger Log = new Logger(DefaultMismatchDetector.class); private final MirroringMetricsRecorder metricsRecorder; - public DefaultMismatchDetector(MirroringTracer mirroringTracer) { + public DefaultMismatchDetector(MirroringTracer mirroringTracer, Integer maxValueBytesLogged) { 
this.metricsRecorder = mirroringTracer.metricsRecorder; + this.maxValueBytesLogged = maxValueBytesLogged; } public void exists(Get request, boolean primary, boolean secondary) { - if (primary != secondary) { - // TODO: change system.out.println to Log.trace() - System.out.println("exists mismatch"); + if (primary == secondary) { + this.metricsRecorder.recordReadMatches(HBaseOperation.EXISTS, 1); + this.metricsRecorder.recordReadMismatches(HBaseOperation.EXISTS, 0); + } else { + Log.debug( + "exists(row=%s) mismatch: (%b, %b)", + new LazyBytesHexlifier(request.getRow(), maxValueBytesLogged), primary, secondary); this.metricsRecorder.recordReadMismatches(HBaseOperation.EXISTS, 1); } } @Override public void exists(Get request, Throwable throwable) { - System.out.println("exists failed"); + Log.debug( + "exists(row=%s) failed: (throwable=%s)", + new LazyBytesHexlifier(request.getRow(), maxValueBytesLogged), throwable); } @Override public void existsAll(List<Get> request, boolean[] primary, boolean[] secondary) { if (!Arrays.equals(primary, secondary)) { - System.out.println("existsAll mismatch"); - this.metricsRecorder.recordReadMismatches(HBaseOperation.EXISTS, primary.length); + int mismatches = 0; + for (int i = 0; i < primary.length; i++) { + if (primary[i] != secondary[i]) { + Log.debug( + "existsAll(row=%s) mismatch: (%b, %b)", + new LazyBytesHexlifier(request.get(i).getRow(), maxValueBytesLogged), + primary[i], + secondary[i]); + mismatches++; + } + } + if (mismatches != primary.length) { + this.metricsRecorder.recordReadMatches( + HBaseOperation.EXISTS_ALL, primary.length - mismatches); + } + this.metricsRecorder.recordReadMismatches(HBaseOperation.EXISTS_ALL, mismatches); + } else { + this.metricsRecorder.recordReadMismatches(HBaseOperation.EXISTS_ALL, 0); } } @Override public void existsAll(List<Get> request, Throwable throwable) { - System.out.println("existsAll failed"); + Log.debug( + "existsAll(rows=%s) failed: (throwable=%s)", + listOfHexRows(request,
maxValueBytesLogged), throwable); } public void get(Get request, Result primary, Result secondary) { - if (!Comparators.resultsEqual(primary, secondary)) { - System.out.println("get mismatch"); + if (Comparators.resultsEqual(primary, secondary)) { + this.metricsRecorder.recordReadMatches(HBaseOperation.GET, 1); + this.metricsRecorder.recordReadMismatches(HBaseOperation.GET, 0); + } else { + Log.debug( + "get(row=%s) mismatch: (%s, %s)", + new LazyBytesHexlifier(request.getRow(), maxValueBytesLogged), + new LazyBytesHexlifier(getResultValue(primary), maxValueBytesLogged), + new LazyBytesHexlifier(getResultValue(secondary), maxValueBytesLogged)); this.metricsRecorder.recordReadMismatches(HBaseOperation.GET, 1); } } @Override public void get(Get request, Throwable throwable) { - System.out.println("get failed"); + Log.debug( + "get(row=%s) failed: (throwable=%s)", + new LazyBytesHexlifier(request.getRow(), maxValueBytesLogged), throwable); } @Override public void get(List<Get> request, Result[] primary, Result[] secondary) { - verifyResults(primary, secondary, "getAll mismatch", HBaseOperation.GET_LIST); + verifyBatchGet(primary, secondary, "get", HBaseOperation.GET_LIST); } @Override public void get(List<Get> request, Throwable throwable) { - System.out.println("getAll failed"); + Log.debug( + "get(rows=%s) failed: (throwable=%s)", + listOfHexRows(request, maxValueBytesLogged), throwable); } @Override - public void scannerNext(Scan request, int entriesAlreadyRead, Result primary, Result secondary) { - if (!Comparators.resultsEqual(primary, secondary)) { - System.out.println("scan() mismatch"); - this.metricsRecorder.recordReadMismatches(HBaseOperation.NEXT, 1); - } + public void scannerNext( + Scan request, ScannerResultVerifier scanResultVerifier, Result primary, Result secondary) { + scanResultVerifier.verify(new Result[] {primary}, new Result[] {secondary}); } @Override - public void scannerNext(Scan request, int entriesAlreadyRead, Throwable throwable) { -
System.out.println("scan() failed"); + public void scannerNext(Scan request, Throwable throwable) { + Log.debug("scan(id=%s) failed: (throwable=%s)", request.getId(), throwable); } @Override public void scannerNext( - Scan request, int entriesAlreadyRead, Result[] primary, Result[] secondary) { - // TODO: try to find all matching elements and report only those that were not successful - verifyResults(primary, secondary, "scan(i) mismatch", HBaseOperation.NEXT_MULTIPLE); + Scan request, + ScannerResultVerifier scanResultVerifier, + Result[] primary, + Result[] secondary) { + scanResultVerifier.verify(primary, secondary); } @Override - public void scannerNext( - Scan request, int entriesAlreadyRead, int entriesRequested, Throwable throwable) { - System.out.println("scan(i) failed"); + public void scannerNext(Scan request, int entriesRequested, Throwable throwable) { + Log.debug("scan(id=%s) failed: (throwable=%s)", request.getId(), throwable); } @Override public void batch(List<? extends Row> request, Result[] primary, Result[] secondary) { - verifyResults(primary, secondary, "batch() mismatch", HBaseOperation.BATCH); + verifyBatchGet(primary, secondary, "batch", HBaseOperation.BATCH); } @Override public void batch(List<? extends Row> request, Throwable throwable) { - System.out.println("batch() failed"); + Log.debug( + "batch(rows=%s) failed: (throwable=%s)", + listOfHexRows(request, maxValueBytesLogged), throwable); } - private void verifyResults( - Result[] primary, Result[] secondary, String errorMessage, HBaseOperation operation) { - int minLength = Math.min(primary.length, secondary.length); - int errors = Math.max(primary.length, secondary.length) - minLength; - for (int i = 0; i < minLength; i++) { + private void verifyBatchGet( + Result[] primary, Result[] secondary, String operationName, HBaseOperation operation) { + int errors = 0; + int matches = 0; + for (int i = 0; i < primary.length; i++) { if (Comparators.resultsEqual(primary[i], secondary[i])) { - // TODO: We can use code from
bigtable client to properly crop row keys which might be very long. - System.out.println(errorMessage); + matches++; + } else { + Log.debug( + "%s(row=%s) mismatch: (%s, %s)", + operationName, + new LazyBytesHexlifier(getResultRow(primary[i]), maxValueBytesLogged), + new LazyBytesHexlifier(getResultValue(primary[i]), maxValueBytesLogged), + new LazyBytesHexlifier(getResultValue(secondary[i]), maxValueBytesLogged)); errors++; } } - if (errors > 0) { - this.metricsRecorder.recordReadMismatches(operation, errors); + if (matches > 0) { + this.metricsRecorder.recordReadMatches(operation, matches); + } + this.metricsRecorder.recordReadMismatches(operation, errors); + } + + private byte[] getResultValue(Result result) { + return result == null ? null : result.value(); + } + + private byte[] getResultRow(Result result) { + return result == null ? null : result.getRow(); + } + + @Override + public ScannerResultVerifier createScannerResultVerifier(Scan request, int maxBufferedResults) { + return new DefaultScannerResultVerifier(request, maxBufferedResults); + } + + /** + * Helper class used to detect non-trivial mismatches in scan operations. + * + * <p>Assumption: scanners return results ordered lexicographically by row key. + */ + public class DefaultScannerResultVerifier implements ScannerResultVerifier { + + private final LinkedList<Result> primaryMismatchBuffer; + private final Set<ResultRowKey> primaryKeys; + + private final LinkedList<Result> secondaryMismatchBuffer; + private final Set<ResultRowKey> secondaryKeys; + + private final SortedSet<ResultRowKey> commonRowKeys; + + private final int sizeLimit; + private final Scan scanRequest; + + private DefaultScannerResultVerifier(Scan scan, int sizeLimit) { + this.scanRequest = scan; + this.sizeLimit = sizeLimit; + this.primaryMismatchBuffer = new LinkedList<>(); + this.primaryKeys = new HashSet<>(); + this.secondaryMismatchBuffer = new LinkedList<>(); + this.secondaryKeys = new HashSet<>(); + this.commonRowKeys = new TreeSet<>(); + } + + @Override + public void flush() { + this.shrinkBuffer(this.primaryMismatchBuffer, this.primaryKeys, "primary", 0); + this.shrinkBuffer(this.secondaryMismatchBuffer, this.secondaryKeys, "secondary", 0); + } + + @Override + public void verify(Result[] primary, Result[] secondary) { + this.extendBuffers(primary, secondary); + this.matchResults(); + this.shrinkBuffers(); + } + + private void shrinkBuffers() { + this.shrinkBuffer(this.primaryMismatchBuffer, this.primaryKeys, "primary", this.sizeLimit); + this.shrinkBuffer( + this.secondaryMismatchBuffer, this.secondaryKeys, "secondary", this.sizeLimit); + } + + private void extendBuffers(Result[] primary, Result[] secondary) { + for (Result result : primary) { + // result.getRow() is not expected to return `null`, but we are handling this case to ease + // mocking in unit tests.
+ if (result == null || result.getRow() == null) { + continue; + } + this.primaryMismatchBuffer.add(result); + ResultRowKey rowKey = new ResultRowKey(result.getRow()); + this.primaryKeys.add(rowKey); + if (this.secondaryKeys.contains(rowKey)) { + this.commonRowKeys.add(rowKey); + } + } + + for (Result result : secondary) { + // result.getRow() is not expected to return `null`, but we are handling this case to ease + // mocking in unit tests. + if (result == null || result.getRow() == null) { + continue; + } + this.secondaryMismatchBuffer.add(result); + ResultRowKey rowKey = new ResultRowKey(result.getRow()); + this.secondaryKeys.add(rowKey); + if (this.primaryKeys.contains(rowKey)) { + this.commonRowKeys.add(rowKey); + } + } + } + + private void matchResults() { + for (ResultRowKey firstMatchingRowKey : this.commonRowKeys) { + Result primaryMatchingResult = + this.dropAndReportUntilMatch( + this.primaryMismatchBuffer, this.primaryKeys, "primary", firstMatchingRowKey); + Result secondaryMatchingResult = + this.dropAndReportUntilMatch( + this.secondaryMismatchBuffer, this.secondaryKeys, "secondary", firstMatchingRowKey); + this.compareMatchingRowsResults(primaryMatchingResult, secondaryMatchingResult); + } + this.commonRowKeys.clear(); + } + + private void compareMatchingRowsResults( + Result primaryMatchingResult, Result secondaryMatchingResult) { + if (!Comparators.resultsEqual(primaryMatchingResult, secondaryMatchingResult)) { + logAndRecordScanMismatch(primaryMatchingResult, secondaryMatchingResult); + } else { + metricsRecorder.recordReadMatches(HBaseOperation.NEXT, 1); + metricsRecorder.recordReadMismatches(HBaseOperation.NEXT, 0); + } + } + + private Result dropAndReportUntilMatch( + LinkedList<Result> buffer, + Set<ResultRowKey> keySet, + String databaseName, + ResultRowKey matchingKey) { + Iterator<Result> bufferIterator = buffer.iterator(); + while (bufferIterator.hasNext()) { + Result result = bufferIterator.next(); + bufferIterator.remove(); + keySet.remove(new ResultRowKey(result.getRow())); + if (matchingKey.compareTo(result.getRow())) { + return result; + } else { + logAndReportMissingEntry(result, databaseName); + } + } + Log.error( + "DefaultScannerResultVerifier did not find a matching element in the buffered list; the ordering invariant is broken."); + return null; + } + + private void shrinkBuffer( + Deque<Result> mismatchBuffer, Set<ResultRowKey> keySet, String type, int targetSize) { + int toRemove = Math.max(0, mismatchBuffer.size() - targetSize); + for (int i = 0; i < toRemove; i++) { + Result result = mismatchBuffer.removeFirst(); + logAndReportMissingEntry(result, type); + keySet.remove(new ResultRowKey(result.getRow())); + } + } + + private void logAndReportMissingEntry(Result scanResult, String databaseName) { + Log.debug( + String.format( + "scan(id=%s) mismatch: only %s database contains (row=%s)", + this.scanRequest.getId(), + databaseName, + new LazyBytesHexlifier(scanResult.getRow(), maxValueBytesLogged))); + metricsRecorder.recordReadMismatches(HBaseOperation.NEXT, 1); + } + + private void logAndRecordScanMismatch(Result primaryResult, Result secondaryResult) { + Log.debug( + String.format( + "scan(id=%s) mismatch: databases contain different rows (row=%s)", + this.scanRequest.getId(), + new LazyBytesHexlifier(primaryResult.getRow(), maxValueBytesLogged))); + metricsRecorder.recordReadMismatches(HBaseOperation.NEXT, 1); + } + } + + // Used for logging. Overrides toString() in order to be as lazy as possible. + // Adapted from Apache Commons Codec's Hex.
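The buffering performed by DefaultScannerResultVerifier above can be reduced to a toy model (hypothetical names; plain-string row keys and one-sided buffering for brevity): rows from each scanner are buffered until a key shows up on both sides, and because both scanners return rows in key order, anything buffered before that common key must be missing from the other database.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

// Toy model of the verifier's buffering idea (not the patch's API).
public class ScannerMatchSketch {

  // Returns the primary rows that never show up in the secondary scanner.
  static List<String> onlyInPrimary(List<String> primary, List<String> secondary) {
    Deque<String> buffered = new ArrayDeque<>(primary);
    List<String> missing = new ArrayList<>();
    for (String commonKey : secondary) {
      // Every primary row buffered before the common key cannot appear later
      // in the (sorted) secondary scanner, so it must be missing there.
      while (!buffered.isEmpty() && !buffered.peekFirst().equals(commonKey)) {
        missing.add(buffered.removeFirst());
      }
      if (!buffered.isEmpty()) {
        buffered.removeFirst(); // the row both databases agree on
      }
    }
    return missing;
  }

  public static void main(String[] args) {
    System.out.println(
        onlyInPrimary(
            Arrays.asList("a", "b", "c", "d"), Arrays.asList("a", "c", "d"))); // [b]
  }
}
```

The real verifier additionally buffers the secondary side, compares cell contents of matched rows, and caps buffer growth with `sizeLimit`, reporting evicted rows as mismatches.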
+ public static class LazyBytesHexlifier { + private static final char[] DIGITS = { + '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' + }; + + public static List<LazyBytesHexlifier> listOfHexRows(List<Get> gets, int maxBytesPrinted) { + List<LazyBytesHexlifier> out = new ArrayList<>(gets.size()); + for (Get get : gets) { + out.add(new LazyBytesHexlifier(get.getRow(), maxBytesPrinted)); + } + return out; + } + + private final byte[] bytes; + private final int maxBytesPrinted; + + public LazyBytesHexlifier(byte[] bytes, int maxBytesPrinted) { + this.bytes = bytes; + this.maxBytesPrinted = maxBytesPrinted; + } + + private void bytesToHex( + final char[] out, final int outOffset, final int bytesOffset, final int bytesLength) { + for (int i = bytesOffset, j = outOffset; i < bytesOffset + bytesLength; i++) { + out[j++] = DIGITS[(0xF0 & this.bytes[i]) >>> 4]; + out[j++] = DIGITS[0x0F & this.bytes[i]]; + } + } + + @Override + public String toString() { + if (this.bytes == null) { + return "null"; + } + + int bytesToPrint = Math.min(this.bytes.length, maxBytesPrinted); + if (bytesToPrint <= 0) { + return ""; + } + boolean skipSomeBytes = bytesToPrint != this.bytes.length; + char[] out; + if (skipSomeBytes) { + int numEndBytes = bytesToPrint / 2; + int numStartBytes = bytesToPrint - numEndBytes; + int numDots = 3; + + int startDotsIdx = 2 * numStartBytes; + int endDotsIdx = 2 * numStartBytes + numDots; + + out = new char[numDots + (bytesToPrint << 1)]; + + bytesToHex(out, 0, 0, numStartBytes); + for (int i = startDotsIdx; i < endDotsIdx; i++) { + out[i] = '.'; + } + bytesToHex(out, endDotsIdx, this.bytes.length - numEndBytes, numEndBytes); + } else { + out = new char[bytesToPrint << 1]; + bytesToHex(out, 0, 0, bytesToPrint); + } + return new String(out); + } + } + + /** + * Wrapper around byte[] that has correct hashCode, equals and is lexicographically comparable.
+ */ + public static class ResultRowKey implements Comparable<ResultRowKey> { + private final ByteBuffer byteBuffer; + + public ResultRowKey(byte[] rowKey) { + this.byteBuffer = ByteBuffer.wrap(rowKey); + } + + @Override + public int hashCode() { + return this.byteBuffer.hashCode(); + } + + @Override + public boolean equals(Object obj) { + return obj != null + && obj.getClass() == this.getClass() + && this.byteBuffer.equals(((ResultRowKey) obj).byteBuffer); + } + + @Override + public int compareTo(ResultRowKey resultRowKey) { + return this.byteBuffer.compareTo(resultRowKey.byteBuffer); + } + + public boolean compareTo(byte[] bytes) { + return Bytes.compareTo(this.byteBuffer.array(), bytes) == 0; + } + } + + public static class Factory implements MismatchDetector.Factory { + @Override + public MismatchDetector create(MirroringTracer mirroringTracer, Integer maxValueBytesLogged) { + return new DefaultMismatchDetector(mirroringTracer, maxValueBytesLogged); } } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/MismatchDetector.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/MismatchDetector.java index 4251abbc75..ce5e29573d 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/MismatchDetector.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/MismatchDetector.java @@ -15,6 +15,7 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.verification; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; import java.util.List; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; @@
-42,15 +43,34 @@ public interface MismatchDetector { void get(List<Get> request, Throwable throwable); - void scannerNext(Scan request, int entriesAlreadyRead, Result primary, Result secondary); + void scannerNext( + Scan request, ScannerResultVerifier mismatches, Result primary, Result secondary); - void scannerNext(Scan request, int entriesAlreadyRead, Throwable throwable); + void scannerNext(Scan request, Throwable throwable); - void scannerNext(Scan request, int entriesAlreadyRead, Result[] primary, Result[] secondary); + void scannerNext( + Scan request, ScannerResultVerifier mismatches, Result[] primary, Result[] secondary); - void scannerNext(Scan request, int entriesAlreadyRead, int entriesRequested, Throwable throwable); + void scannerNext(Scan request, int entriesRequested, Throwable throwable); void batch(List<? extends Row> request, Result[] primary, Result[] secondary); void batch(List<? extends Row> request, Throwable throwable); + + interface Factory { + MismatchDetector create(MirroringTracer mirroringTracer, Integer maxLoggedBinaryValueLength) + throws Throwable; + } + + ScannerResultVerifier createScannerResultVerifier(Scan request, int maxBufferedResults); + + /** + * Interface of helper classes used to detect non-trivial mismatches in scan operations, such as + * elements missing in one of the databases.
+ */ + interface ScannerResultVerifier { + void verify(Result[] primary, Result[] secondary); + + void flush(); + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/VerificationContinuationFactory.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/VerificationContinuationFactory.java index 23942ccfa7..05c1e27aff 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/VerificationContinuationFactory.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase1_x/verification/VerificationContinuationFactory.java @@ -21,6 +21,7 @@ import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger; import com.google.common.util.concurrent.FutureCallback; import java.util.List; +import java.util.concurrent.ConcurrentLinkedQueue; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; @@ -115,21 +116,27 @@ private Result firstOrNull(Result[] results) { return results[0]; } - public FutureCallback<AsyncScannerVerificationPayload> scannerNext() { - return new FutureCallback<AsyncScannerVerificationPayload>() { + public FutureCallback<Void> scannerNext( + final Object verificationLock, + final ConcurrentLinkedQueue<AsyncScannerVerificationPayload> resultQueue, + final MismatchDetector.ScannerResultVerifier unmatched) { + return new FutureCallback<Void>() { @Override - public void onSuccess(@NullableDecl AsyncScannerVerificationPayload results) { - Log.trace("verification onSuccess scannerNext(Scan, int)"); - Scan request = results.context.scan; - if (results.context.singleNext) { - VerificationContinuationFactory.this.mismatchDetector.scannerNext( - request, - results.context.startingIndex, -
firstOrNull(results.context.result), - firstOrNull(results.secondary)); - } else { - VerificationContinuationFactory.this.mismatchDetector.scannerNext( - request, results.context.startingIndex, results.context.result, results.secondary); + public void onSuccess(@NullableDecl Void ignored) { + synchronized (verificationLock) { + AsyncScannerVerificationPayload results = resultQueue.remove(); + Log.trace("verification onSuccess scannerNext(Scan, int)"); + Scan request = results.context.scan; + if (results.context.singleNext) { + VerificationContinuationFactory.this.mismatchDetector.scannerNext( + request, + unmatched, + firstOrNull(results.context.result), + firstOrNull(results.secondary)); + } else { + VerificationContinuationFactory.this.mismatchDetector.scannerNext( + request, unmatched, results.context.result, results.secondary); + } } } @@ -142,13 +149,10 @@ public void onFailure(Throwable throwable) { Scan request = exceptionWithContext.context.scan; if (exceptionWithContext.context.singleNext) { VerificationContinuationFactory.this.mismatchDetector.scannerNext( - request, exceptionWithContext.context.startingIndex, throwable.getCause()); + request, throwable.getCause()); } else { VerificationContinuationFactory.this.mismatchDetector.scannerNext( - request, - exceptionWithContext.context.startingIndex, - exceptionWithContext.context.numRequests, - throwable.getCause()); + request, exceptionWithContext.context.numRequests, throwable.getCause()); } } else { Log.error( diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/ExecutorServiceRule.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/ExecutorServiceRule.java index 51d67aad8c..ad8929f811 100644 --- 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/ExecutorServiceRule.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/ExecutorServiceRule.java @@ -15,9 +15,10 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x; +import static org.junit.Assert.fail; import static org.mockito.Mockito.spy; -import com.google.common.util.concurrent.ListeningExecutorService; +import com.google.common.base.Preconditions; import com.google.common.util.concurrent.MoreExecutors; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; @@ -33,30 +34,50 @@ private enum Type { public final int numThreads; public final Type type; - public ListeningExecutorService executorService; + public ExecutorService executorService; + public final boolean spyed; - private ExecutorServiceRule(Type type, int numThreads) { + private ExecutorServiceRule(Type type, int numThreads, boolean spyed) { this.type = type; this.numThreads = numThreads; + this.spyed = spyed; } public static ExecutorServiceRule singleThreadedExecutor() { - return new ExecutorServiceRule(Type.Single, 1); + return new ExecutorServiceRule(Type.Single, 1, false); + } + + public static ExecutorServiceRule spyedSingleThreadedExecutor() { + return new ExecutorServiceRule(Type.Single, 1, true); } public static ExecutorServiceRule cachedPoolExecutor() { - return new ExecutorServiceRule(Type.Cached, 0); + return new ExecutorServiceRule(Type.Cached, 0, false); + } + + public static ExecutorServiceRule spyedCachedPoolExecutor() { + return new ExecutorServiceRule(Type.Cached, 0, true); } public static ExecutorServiceRule fixedPoolExecutor(int numThreads) { - assert numThreads > 0; - return new ExecutorServiceRule(Type.Fixed, numThreads); + Preconditions.checkArgument(numThreads > 0); + return new ExecutorServiceRule(Type.Fixed, 
numThreads, false); + } + + public static ExecutorServiceRule spyedFixedPoolExecutor(int numThreads) { + Preconditions.checkArgument(numThreads > 0); + return new ExecutorServiceRule(Type.Fixed, numThreads, true); } @Override protected void before() throws Throwable { super.before(); - this.executorService = spy(MoreExecutors.listeningDecorator(createExecutor())); + ExecutorService executorService = createExecutor(); + if (this.spyed) { + this.executorService = spy(MoreExecutors.listeningDecorator(executorService)); + } else { + this.executorService = executorService; + } } private ExecutorService createExecutor() { @@ -82,7 +103,9 @@ protected void after() { public void waitForExecutor() { this.executorService.shutdown(); try { - this.executorService.awaitTermination(3, TimeUnit.SECONDS); + if (!this.executorService.awaitTermination(3, TimeUnit.SECONDS)) { + fail("executor did not terminate"); + } } catch (InterruptedException e) { throw new RuntimeException(e); } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestConnection.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestConnection.java new file mode 100644 index 0000000000..f8c2e91bda --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestConnection.java @@ -0,0 +1,120 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x; + +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.ExecutorService; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.BufferedMutator; +import org.apache.hadoop.hbase.client.BufferedMutatorParams; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.RegionLocator; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.ResultScanner; +import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.security.User; + +public class TestConnection implements Connection { + public static List<Connection> connectionMocks = new ArrayList<>(); + public static List<Table> tableMocks = new ArrayList<>(); + public static List<ResultScanner> scannerMocks = new ArrayList<>(); + private Connection connectionMock; + + public TestConnection(Configuration conf, boolean managed, ExecutorService pool, User user) { + connectionMock = mock(Connection.class); + connectionMocks.add(connectionMock); + } + + public static void reset() { + connectionMocks.clear(); + tableMocks.clear(); + scannerMocks.clear(); + } + + @Override + public Configuration getConfiguration() { +
return connectionMock.getConfiguration(); + } + + @Override + public Table getTable(TableName tableName) throws IOException { + ResultScanner scanner = mock(ResultScanner.class); + doReturn(Result.create(new Cell[0])).when(scanner).next(); + + Table table = mock(Table.class); + doReturn(scanner).when(table).getScanner(any(Scan.class)); + + scannerMocks.add(scanner); + tableMocks.add(table); + return table; + } + + @Override + public Table getTable(TableName tableName, ExecutorService executorService) throws IOException { + return getTable(tableName); + } + + @Override + public BufferedMutator getBufferedMutator(TableName tableName) throws IOException { + return connectionMock.getBufferedMutator(tableName); + } + + @Override + public BufferedMutator getBufferedMutator(BufferedMutatorParams bufferedMutatorParams) + throws IOException { + return connectionMock.getBufferedMutator(bufferedMutatorParams); + } + + @Override + public RegionLocator getRegionLocator(TableName tableName) throws IOException { + return connectionMock.getRegionLocator(tableName); + } + + @Override + public Admin getAdmin() throws IOException { + return connectionMock.getAdmin(); + } + + @Override + public void close() throws IOException { + connectionMock.close(); + } + + @Override + public boolean isClosed() { + return connectionMock.isClosed(); + } + + @Override + public void abort(String s, Throwable throwable) { + connectionMock.abort(s, throwable); + } + + @Override + public boolean isAborted() { + return connectionMock.isAborted(); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestHelpers.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestHelpers.java index f35d7b25e4..8e24dbfaa6 100644 --- 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestHelpers.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestHelpers.java @@ -18,7 +18,6 @@ import static com.google.common.truth.Truth.assertThat; import static org.mockito.ArgumentMatchers.any; import static org.mockito.Mockito.doAnswer; -import static org.mockito.Mockito.doReturn; import static org.mockito.Mockito.lenient; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; @@ -26,14 +25,19 @@ import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.common.base.Preconditions; +import com.google.common.base.Stopwatch; import com.google.common.util.concurrent.SettableFuture; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collection; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.concurrent.Semaphore; import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue.Type; @@ -41,9 +45,13 @@ import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; import org.apache.hadoop.hbase.client.Row; import org.apache.hadoop.hbase.client.Table; import org.mockito.ArgumentMatchers; +import org.mockito.Mockito; +import org.mockito.exceptions.base.MockitoException; +import 
org.mockito.invocation.Invocation; import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; @@ -134,23 +142,85 @@ public static IOException setupFlowControllerToRejectRequests(FlowController flo SettableFuture.create(); resourceReservationFuture.setException(thrownException); - doReturn(resourceReservationFuture) + lenient() + .doReturn(resourceReservationFuture) .when(flowController) .asyncRequestResource(any(RequestResourcesDescription.class)); return thrownException; } + /** + * A helper function that blocks a method on a mock until a future is set or the default timeout is + * reached. + *
<p>Once unblocked by {@link SettableFuture#set}, the call is let through. + * +
* <p>When the timeout is reached, a TimeoutException is thrown and the blocked method is never called. + * + * @param mock mock whose method is blocked + * @param futureToWaitFor future which unblocks method calls + * @param before action to be called before waiting for {@code futureToWaitFor} + * @param <T> type of {@code mock} + * @return {@code mock} + */ public static <T> T blockMethodCall( - T table, final SettableFuture secondaryOperationAllowedFuture) { + T mock, final SettableFuture futureToWaitFor, final Runnable before) { return doAnswer( new Answer() { @Override public Object answer(InvocationOnMock invocationOnMock) throws Throwable { - secondaryOperationAllowedFuture.get(10, TimeUnit.SECONDS); - return invocationOnMock.callRealMethod(); + before.run(); + futureToWaitFor.get(10, TimeUnit.SECONDS); + try { + return invocationOnMock.callRealMethod(); + } catch (MockitoException e) { + // there was no real method to call, ignore. + return null; + } } }) - .when(table); + .when(mock); + } + + public static <T> T blockMethodCall( + T mock, final SettableFuture secondaryOperationAllowedFuture) { + return blockMethodCall( + mock, + secondaryOperationAllowedFuture, + new Runnable() { + @Override + public void run() {} + }); + } + + public static <T> T blockMethodCall( + T mock, + final SettableFuture secondaryOperationAllowedFuture, + final SettableFuture startedFuture) { + return blockMethodCall( + mock, + secondaryOperationAllowedFuture, + new Runnable() { + @Override + public void run() { + startedFuture.set(null); + } + }); + } + + public static <T> T blockMethodCall( + T mock, + final SettableFuture secondaryOperationAllowedFuture, + final Semaphore startedSemaphore) { + return blockMethodCall( + mock, + secondaryOperationAllowedFuture, + new Runnable() { + @Override + public void run() { + startedSemaphore.release(); + } + }); } public static <T> SettableFuture<Void> blockMethodCall(T methodCall) { @@ -161,12 +231,57 @@ public static <T> SettableFuture<Void> blockMethodCall(T methodCall) { @Override 
public Object answer(InvocationOnMock invocationOnMock) throws Throwable { secondaryOperationAllowedFuture.get(10, TimeUnit.SECONDS); - return invocationOnMock.callRealMethod(); + try { + return invocationOnMock.callRealMethod(); + } catch (MockitoException e) { + // there was no real method to call, ignore. + return null; + } } }); return secondaryOperationAllowedFuture; } + public static <T> T delayMethodCall(T mock, final int ms) { + return doAnswer( + new Answer() { + @Override + public Object answer(InvocationOnMock invocationOnMock) throws Throwable { + Thread.sleep(ms); + try { + return invocationOnMock.callRealMethod(); + } catch (MockitoException e) { + // there was no real method to call, ignore. + return null; + } + } + }) + .when(mock); + } + + public static <T> void waitUntilCalled( + T mockitoObject, String methodName, int calls, int timeoutSeconds) throws TimeoutException { + Stopwatch stopwatch = Stopwatch.createStarted(); + while (stopwatch.elapsed(TimeUnit.SECONDS) < timeoutSeconds) { + Collection<Invocation> invocations = Mockito.mockingDetails(mockitoObject).getInvocations(); + long count = 0; + for (Invocation invocation : invocations) { + if (invocation.getMethod().getName().equals(methodName)) { + count++; + } + } + if (count >= calls) { + return; + } + try { + Thread.sleep(100); + } catch (InterruptedException e) { + throw new RuntimeException(e); + } + } + throw new TimeoutException(); + } + public static void assertPutsAreEqual( Put expectedPut, Put value, CellComparatorCompat cellComparator) { assertThat(expectedPut.getRow()).isEqualTo(value.getRow()); @@ -187,7 +302,7 @@ public interface CellComparatorCompat } public static Map mapOf(Object... 
keyValuePairs) { - assert keyValuePairs.length % 2 == 0; + Preconditions.checkArgument(keyValuePairs.length % 2 == 0); Map mapping = new HashMap<>(); for (int i = 0; i < keyValuePairs.length; i += 2) { mapping.put(keyValuePairs[i], keyValuePairs[i + 1]); @@ -210,24 +325,46 @@ public static void mockBatch(Table table, Object... keyValuePairs) .batch(ArgumentMatchers.anyList(), any(Object[].class)); } + /** + * Function used to mock Table.batch(operations, results) call by filling the result Array. + * + *
<p>For objects in {@code keyValuePairs}, returns the provided value; otherwise constructs a + * default one. + *
<p>Throws iff any of the values returned to the caller of batch is a Throwable. + * + * @param keyValuePairs key:value pairs of objects; a key may be either an operation or an operation + * class + * @return {@link Answer} for use in {@link org.mockito.stubbing.BaseStubber#doAnswer(Answer)} + */ public static Answer<Void> createMockBatchAnswer(final Object... keyValuePairs) { final Map mapping = mapOf(keyValuePairs); return new Answer<Void>() { @Override public Void answer(InvocationOnMock invocationOnMock) throws Throwable { - boolean shouldThrow = false; Object[] args = invocationOnMock.getArguments(); List operations = (List) args[0]; Object[] result = (Object[]) args[1]; + List<Throwable> exceptions = new ArrayList<>(); + List<Row> failedOps = new ArrayList<>(); + List<String> hostnameAndPorts = new ArrayList<>(); + for (int i = 0; i < operations.size(); i++) { Row operation = operations.get(i); - if (mapping.containsKey(operation)) { - Object value = mapping.get(operation); + if (mapping.containsKey(operation) || mapping.containsKey(operation.getClass())) { + Object value; + if (mapping.containsKey(operation)) { + value = mapping.get(operation); + } else { + value = mapping.get(operation.getClass()); + } result[i] = value; if (value instanceof Throwable) { - shouldThrow = true; + failedOps.add(operation); + exceptions.add((Throwable) value); + hostnameAndPorts.add("test:1"); } } else if (operation instanceof Get) { Get get = (Get) operation; @@ -236,8 +373,8 @@ public Void answer(InvocationOnMock invocationOnMock) throws Throwable { result[i] = Result.create(new Cell[0]); } } - if (shouldThrow) { - throw new IOException(); + if (!failedOps.isEmpty()) { + throw new RetriesExhaustedWithDetailsException(exceptions, failedOps, hostnameAndPorts); } return null; } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringBufferedMutator.java 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringBufferedMutator.java deleted file mode 100644 index 9d75d10242..0000000000 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringBufferedMutator.java +++ /dev/null @@ -1,488 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package com.google.cloud.bigtable.mirroring.hbase1_x; - -import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerMock; -import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH; -import static com.google.common.truth.Truth.assertThat; -import static org.junit.Assert.fail; -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.ArgumentMatchers.eq; -import static org.mockito.Mockito.atLeastOnce; -import static org.mockito.Mockito.doAnswer; -import static org.mockito.Mockito.doReturn; -import static org.mockito.Mockito.never; -import static org.mockito.Mockito.times; -import static org.mockito.Mockito.verify; - -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; -import com.google.common.primitives.Longs; -import com.google.common.util.concurrent.SettableFuture; -import java.io.IOException; -import java.util.Arrays; -import java.util.List; -import java.util.concurrent.Callable; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; -import java.util.concurrent.atomic.AtomicInteger; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.client.BufferedMutator; -import org.apache.hadoop.hbase.client.BufferedMutatorParams; -import org.apache.hadoop.hbase.client.Connection; -import org.apache.hadoop.hbase.client.Delete; -import 
org.apache.hadoop.hbase.client.Mutation; -import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; -import org.apache.hadoop.hbase.client.Row; -import org.junit.Before; -import org.junit.Rule; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.JUnit4; -import org.mockito.ArgumentCaptor; -import org.mockito.ArgumentMatchers; -import org.mockito.Mock; -import org.mockito.invocation.InvocationOnMock; -import org.mockito.junit.MockitoJUnit; -import org.mockito.junit.MockitoRule; -import org.mockito.stubbing.Answer; - -@RunWith(JUnit4.class) -public class TestMirroringBufferedMutator { - @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule(); - - @Rule - public final ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); - - @Mock BufferedMutator primaryBufferedMutator; - @Mock BufferedMutator secondaryBufferedMutator; - - @Mock Connection primaryConnection; - @Mock Connection secondaryConnection; - @Mock FlowController flowController; - @Mock ResourceReservation resourceReservation; - @Mock SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumerWithMetrics; - - BufferedMutatorParams bufferedMutatorParams = - new BufferedMutatorParams(TableName.valueOf("test1")); - - ArgumentCaptor primaryBufferedMutatorParamsCaptor; - ArgumentCaptor secondaryBufferedMutatorParamsCaptor; - - @Before - public void setUp() throws IOException { - this.primaryBufferedMutatorParamsCaptor = ArgumentCaptor.forClass(BufferedMutatorParams.class); - doReturn(primaryBufferedMutator) - .when(primaryConnection) - .getBufferedMutator(primaryBufferedMutatorParamsCaptor.capture()); - - this.secondaryBufferedMutatorParamsCaptor = - ArgumentCaptor.forClass(BufferedMutatorParams.class); - doReturn(secondaryBufferedMutator) - .when(secondaryConnection) - .getBufferedMutator(secondaryBufferedMutatorParamsCaptor.capture()); - - resourceReservation = setupFlowControllerMock(flowController); - } - - @Test - 
public void testBufferedWritesWithoutErrors() throws IOException, InterruptedException { - Mutation mutation = new Delete("key1".getBytes()); - long mutationSize = mutation.heapSize(); - - BufferedMutator bm = getBufferedMutator((long) (mutationSize * 3.5)); - - bm.mutate(mutation); - verify(primaryBufferedMutator, times(1)).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, never()).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, never()).mutate(any(Mutation.class)); - bm.mutate(mutation); - verify(primaryBufferedMutator, times(2)).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, never()).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, never()).mutate(any(Mutation.class)); - bm.mutate(mutation); - verify(primaryBufferedMutator, times(3)).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, never()).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, never()).mutate(any(Mutation.class)); - bm.mutate(mutation); - Thread.sleep(300); - executorServiceRule.waitForExecutor(); - verify(primaryBufferedMutator, times(4)).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, times(1)).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, never()).mutate(any(Mutation.class)); - verify(secondaryBufferedMutator, times(1)).flush(); - verify(resourceReservation, times(4)).release(); - } - - @Test - public void testBufferedMutatorFlush() throws IOException { - Mutation mutation = new Delete("key1".getBytes()); - long mutationSize = mutation.heapSize(); - - BufferedMutator bm = getBufferedMutator((long) (mutationSize * 3.5)); - - bm.mutate(mutation); - bm.mutate(mutation); - bm.mutate(mutation); - bm.flush(); - executorServiceRule.waitForExecutor(); - verify(primaryBufferedMutator, times(3)).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, times(1)).mutate(ArgumentMatchers.anyList()); - 
verify(secondaryBufferedMutator, never()).mutate(any(Mutation.class)); - verify(secondaryBufferedMutator, times(1)).flush(); - verify(resourceReservation, times(3)).release(); - } - - @Test - public void testCloseFlushesWrites() throws IOException { - Mutation mutation = new Delete("key1".getBytes()); - long mutationSize = mutation.heapSize(); - - BufferedMutator bm = getBufferedMutator((long) (mutationSize * 3.5)); - - bm.mutate(mutation); - bm.mutate(mutation); - bm.mutate(mutation); - bm.close(); - verify(primaryBufferedMutator, times(3)).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, times(1)).mutate(ArgumentMatchers.anyList()); - verify(secondaryBufferedMutator, times(1)).flush(); - verify(resourceReservation, times(3)).release(); - } - - @Test - public void testCloseIsIdempotent() throws IOException { - Mutation mutation = new Delete("key1".getBytes()); - long mutationSize = mutation.heapSize(); - - BufferedMutator bm = getBufferedMutator((long) (mutationSize * 3.5)); - - bm.mutate(mutation); - bm.mutate(mutation); - bm.mutate(mutation); - bm.close(); - bm.close(); - verify(secondaryBufferedMutator, times(1)).flush(); - verify(resourceReservation, times(3)).release(); - } - - @Test - public void testFlushesCanBeScheduledSimultaneously() - throws IOException, InterruptedException, TimeoutException, ExecutionException { - Mutation mutation = new Delete("key1".getBytes()); - long mutationSize = mutation.heapSize(); - - final AtomicInteger ongoingFlushes = new AtomicInteger(0); - final SettableFuture allFlushesStarted = SettableFuture.create(); - final SettableFuture endFlush = SettableFuture.create(); - - doAnswer(blockedFlushes(ongoingFlushes, allFlushesStarted, endFlush, 4)) - .when(primaryBufferedMutator) - .flush(); - - BufferedMutator bm = getBufferedMutator((long) (mutationSize * 1.5)); - - bm.mutate(mutation); - bm.mutate(mutation); - - bm.mutate(mutation); - bm.mutate(mutation); - - bm.mutate(mutation); - bm.mutate(mutation); - - 
bm.mutate(mutation); - bm.mutate(mutation); - - allFlushesStarted.get(3, TimeUnit.SECONDS); - assertThat(ongoingFlushes.get()).isEqualTo(4); - endFlush.set(null); - executorServiceRule.waitForExecutor(); - verify(secondaryBufferedMutator, times(4)).mutate(ArgumentMatchers.anyList()); - verify(resourceReservation, times(8)).release(); - } - - @Test - public void testErrorsReportedByPrimaryAreNotUsedBySecondary() throws IOException { - final Mutation mutation1 = new Delete("key1".getBytes()); - final Mutation mutation2 = new Delete("key2".getBytes()); - final Mutation mutation3 = new Delete("key3".getBytes()); - final Mutation mutation4 = new Delete("key4".getBytes()); - - long mutationSize = mutation1.heapSize(); - - doAnswer( - mutateWithErrors( - this.primaryBufferedMutatorParamsCaptor, - primaryBufferedMutator, - mutation1, - mutation3)) - .when(primaryBufferedMutator) - .mutate(ArgumentMatchers.anyList()); - - BufferedMutator bm = getBufferedMutator((long) (mutationSize * 3.5)); - - bm.mutate(mutation1); - bm.mutate(mutation2); - bm.mutate(mutation3); - bm.mutate(mutation4); - executorServiceRule.waitForExecutor(); - verify(secondaryBufferedMutator, times(1)).mutate(Arrays.asList(mutation2, mutation4)); - } - - @Test - public void testErrorsReportedBySecondaryAreReportedAsWriteErrors() throws IOException { - final Mutation mutation1 = new Delete("key1".getBytes()); - final Mutation mutation2 = new Delete("key2".getBytes()); - final Mutation mutation3 = new Delete("key3".getBytes()); - final Mutation mutation4 = new Delete("key4".getBytes()); - - long mutationSize = mutation1.heapSize(); - - doAnswer( - mutateWithErrors( - this.secondaryBufferedMutatorParamsCaptor, - secondaryBufferedMutator, - mutation1, - mutation3)) - .when(secondaryBufferedMutator) - .mutate(ArgumentMatchers.anyList()); - - MirroringBufferedMutator bm = getBufferedMutator((long) (mutationSize * 3.5)); - - bm.mutate(Arrays.asList(mutation1, mutation2, mutation3, mutation4)); - 
executorServiceRule.waitForExecutor(); - verify(secondaryBufferedMutator, times(1)) - .mutate(Arrays.asList(mutation1, mutation2, mutation3, mutation4)); - - verify(secondaryWriteErrorConsumerWithMetrics, atLeastOnce()) - .consume( - eq(HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST), eq(mutation1), any(Throwable.class)); - verify(secondaryWriteErrorConsumerWithMetrics, atLeastOnce()) - .consume( - eq(HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST), eq(mutation3), any(Throwable.class)); - } - - @Test - public void testSecondaryErrorsDuringSimultaneousFlushes() - throws IOException, InterruptedException, ExecutionException, TimeoutException { - final Mutation mutation1 = new Delete("key1".getBytes()); - final Mutation mutation2 = new Delete("key2".getBytes()); - final Mutation mutation3 = new Delete("key3".getBytes()); - final Mutation mutation4 = new Delete("key4".getBytes()); - - long mutationSize = mutation1.heapSize(); - - final AtomicInteger ongoingFlushes = new AtomicInteger(0); - final SettableFuture allFlushesStarted = SettableFuture.create(); - final SettableFuture endFlush = SettableFuture.create(); - - doAnswer(blockedFlushes(ongoingFlushes, allFlushesStarted, endFlush, 2)) - .when(primaryBufferedMutator) - .flush(); - - doAnswer( - mutateWithErrors( - this.secondaryBufferedMutatorParamsCaptor, - secondaryBufferedMutator, - mutation1, - mutation3)) - .when(secondaryBufferedMutator) - .mutate(ArgumentMatchers.anyList()); - - MirroringBufferedMutator bm = getBufferedMutator((long) (mutationSize * 1.5)); - - bm.mutate(Arrays.asList(mutation1, mutation2)); - bm.mutate(Arrays.asList(mutation3, mutation4)); - allFlushesStarted.get(3, TimeUnit.SECONDS); - - endFlush.set(null); - - executorServiceRule.waitForExecutor(); - verify(secondaryBufferedMutator, atLeastOnce()).mutate(Arrays.asList(mutation1, mutation2)); - verify(secondaryBufferedMutator, atLeastOnce()).mutate(Arrays.asList(mutation3, mutation4)); - - verify(secondaryWriteErrorConsumerWithMetrics, 
atLeastOnce()) - .consume( - eq(HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST), eq(mutation1), any(Throwable.class)); - verify(secondaryWriteErrorConsumerWithMetrics, atLeastOnce()) - .consume( - eq(HBaseOperation.BUFFERED_MUTATOR_MUTATE_LIST), eq(mutation3), any(Throwable.class)); - } - - @Test - public void testPrimaryAsyncFlushExceptionIsReportedOnNextMutateCall() - throws IOException, InterruptedException, ExecutionException, TimeoutException { - final Mutation[] mutations = - new Mutation[] { - new Delete(Longs.toByteArray(0)), - new Delete(Longs.toByteArray(1)), - new Delete(Longs.toByteArray(2)) - }; - - final SettableFuture flushesStarted = SettableFuture.create(); - final SettableFuture performFlush = SettableFuture.create(); - final AtomicInteger runningFlushes = new AtomicInteger(3); - - doAnswer( - new Answer() { - @Override - public Object answer(InvocationOnMock invocationOnMock) throws Throwable { - int value = runningFlushes.decrementAndGet(); - if (value == 0) { - flushesStarted.set(null); - } - performFlush.get(); - - long id = Longs.fromByteArray(mutations[value].getRow()); - RetriesExhaustedWithDetailsException e = - new RetriesExhaustedWithDetailsException( - Arrays.asList((Throwable) new IOException(String.valueOf(id))), - Arrays.asList((Row) mutations[value]), - Arrays.asList("localhost:" + value)); - primaryBufferedMutatorParamsCaptor - .getValue() - .getListener() - .onException(e, primaryBufferedMutator); - return null; - } - }) - .when(primaryBufferedMutator) - .flush(); - - final BufferedMutator bm = getBufferedMutator(1); - - bm.mutate(mutations[2]); - // Wait until flush is started to ensure to ensure that flushes are scheduled in the same order - // as mutations. 
- while (runningFlushes.get() == 3) { - Thread.sleep(100); - } - bm.mutate(mutations[1]); - while (runningFlushes.get() == 2) { - Thread.sleep(100); - } - bm.mutate(mutations[0]); - while (runningFlushes.get() == 1) { - Thread.sleep(100); - } - flushesStarted.get(1, TimeUnit.SECONDS); - performFlush.set(null); - - executorServiceRule.waitForExecutor(); - - verify(secondaryBufferedMutator, never()).flush(); - verify(resourceReservation, times(3)).release(); - - // We have killed the executor, mock next submits. - doAnswer( - new Answer() { - @Override - public Object answer(InvocationOnMock invocationOnMock) throws Throwable { - return SettableFuture.create(); - } - }) - .when(executorServiceRule.executorService) - .submit(any(Callable.class)); - - try { - bm.mutate(mutations[0]); - verify(executorServiceRule.executorService, times(1)).submit(any(Callable.class)); - fail("Should have thrown"); - } catch (RetriesExhaustedWithDetailsException e) { - assertThat(e.getNumExceptions()).isEqualTo(3); - assertThat(Arrays.asList(e.getRow(0), e.getRow(1), e.getRow(2))) - .containsExactly(mutations[0], mutations[1], mutations[2]); - for (int i = 0; i < 3; i++) { - Row r = e.getRow(i); - long id = Longs.fromByteArray(r.getRow()); - assertThat(e.getCause(i).getMessage()).isEqualTo(String.valueOf(id)); - assertThat(e.getHostnamePort(i)).isEqualTo("localhost:" + id); - } - } - - verify(secondaryBufferedMutator, never()).flush(); - verify(resourceReservation, times(3)).release(); - } - - private Answer blockedFlushes( - final AtomicInteger ongoingFlushes, - final SettableFuture allFlushesStarted, - final SettableFuture endFlush, - final int expectedNumberOfFlushes) { - return new Answer() { - @Override - public Void answer(InvocationOnMock invocationOnMock) throws Throwable { - if (ongoingFlushes.incrementAndGet() == expectedNumberOfFlushes) { - allFlushesStarted.set(null); - } - endFlush.get(); - return null; - } - }; - } - - private Answer mutateWithErrors( - final ArgumentCaptor 
argumentCaptor, - final BufferedMutator bufferedMutator, - final Mutation... failingMutations) { - return new Answer() { - @Override - public Void answer(InvocationOnMock invocationOnMock) throws Throwable { - List failingMutationsList = Arrays.asList(failingMutations); - List argument = invocationOnMock.getArgument(0); - for (Mutation m : argument) { - if (failingMutationsList.contains(m)) { - argumentCaptor - .getValue() - .getListener() - .onException( - new RetriesExhaustedWithDetailsException( - Arrays.asList(new Throwable[] {new IOException()}), - Arrays.asList(new Row[] {m}), - Arrays.asList("invalid.example:1234")), - bufferedMutator); - } - } - return null; - } - }; - } - - private MirroringBufferedMutator getBufferedMutator(long flushThreshold) throws IOException { - return new MirroringBufferedMutator( - primaryConnection, - secondaryConnection, - bufferedMutatorParams, - makeConfigurationWithFlushThreshold(flushThreshold), - flowController, - executorServiceRule.executorService, - secondaryWriteErrorConsumerWithMetrics, - new MirroringTracer()); - } - - private MirroringConfiguration makeConfigurationWithFlushThreshold(long flushThreshold) { - Configuration mirroringConfig = new Configuration(); - mirroringConfig.set(MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH, String.valueOf(flushThreshold)); - - return new MirroringConfiguration(new Configuration(), new Configuration(), mirroringConfig); - } -} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConfiguration.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConfiguration.java index 7e5a4a75a2..01ec63fffe 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConfiguration.java 
+++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConfiguration.java @@ -19,6 +19,8 @@ import static org.junit.Assert.assertThrows; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowControlStrategy; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestCountingFlowControlStrategy; import org.apache.hadoop.conf.Configuration; import org.junit.Test; import org.junit.function.ThrowingRunnable; @@ -177,12 +179,21 @@ public void testMirroringOptionsAreRead() { TestConnection.class.getCanonicalName()); testConfiguration.set( MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "test"); + + try { + new RequestCountingFlowControlStrategy(null); + } catch (NullPointerException ignored) { + + } + testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_CLASS, "test-1"); + MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_FACTORY_CLASS, + RequestCountingFlowControlStrategy.Factory.class.getName()); MirroringConfiguration configuration = new MirroringConfiguration(testConfiguration); - assertThat(configuration.mirroringOptions.flowControllerStrategyClass).isEqualTo("test-1"); + assertThat(configuration.mirroringOptions.flowControllerStrategyFactoryClass) + .isEqualTo(RequestCountingFlowControlStrategy.Factory.class); } @Test @@ -216,13 +227,10 @@ public void testDefaultImplClass() { .isEqualTo(null); } - @Test - public void testCopyConstructorSetsImplClasses() { - Configuration empty = new Configuration(false); - MirroringConfiguration emptyMirroringConfiguration = - new MirroringConfiguration(empty, empty, empty); - MirroringConfiguration configuration = new MirroringConfiguration(emptyMirroringConfiguration); - assertThat(configuration.get("hbase.client.connection.impl")) 
- .isEqualTo(MirroringConnection.class.getCanonicalName()); + public static class TestFactory implements FlowControlStrategy.Factory { + @Override + public FlowControlStrategy create(MirroringOptions options) throws Throwable { + return null; + } } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConnection.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConnection.java index 14e898d9dc..72c58b847b 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConnection.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConnection.java @@ -20,96 +20,29 @@ import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SECONDARY_CONFIG_PREFIX_KEY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY; import static com.google.common.truth.Truth.assertThat; -import static org.mockito.Mockito.mock; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.ExecutorService; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.client.Admin; -import org.apache.hadoop.hbase.client.BufferedMutator; -import org.apache.hadoop.hbase.client.BufferedMutatorParams; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; -import org.apache.hadoop.hbase.client.RegionLocator; -import 
org.apache.hadoop.hbase.client.Table; -import org.apache.hadoop.hbase.security.User; +import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; -class TestConnection implements Connection { - public static List mocks = new ArrayList<>(); - private Connection connectionMock; - - public TestConnection(Configuration conf, boolean managed, ExecutorService pool, User user) { - connectionMock = mock(Connection.class); - mocks.add(connectionMock); - } - - @Override - public Configuration getConfiguration() { - return connectionMock.getConfiguration(); - } - - @Override - public Table getTable(TableName tableName) throws IOException { - return connectionMock.getTable(tableName); - } - - @Override - public Table getTable(TableName tableName, ExecutorService executorService) throws IOException { - return connectionMock.getTable(tableName, executorService); - } - - @Override - public BufferedMutator getBufferedMutator(TableName tableName) throws IOException { - return connectionMock.getBufferedMutator(tableName); - } - - @Override - public BufferedMutator getBufferedMutator(BufferedMutatorParams bufferedMutatorParams) - throws IOException { - return connectionMock.getBufferedMutator(bufferedMutatorParams); - } - - @Override - public RegionLocator getRegionLocator(TableName tableName) throws IOException { - return connectionMock.getRegionLocator(tableName); - } - - @Override - public Admin getAdmin() throws IOException { - return connectionMock.getAdmin(); - } - - @Override - public void close() throws IOException { - connectionMock.close(); - } - - @Override - public boolean isClosed() { - return connectionMock.isClosed(); - } - - @Override - public void abort(String s, Throwable throwable) { - connectionMock.abort(s, throwable); - } - - @Override - public boolean isAborted() { - return connectionMock.isAborted(); - } -} - @RunWith(JUnit4.class) public class TestMirroringConnection { + private Connection connection; + + 
@Before + public void setUp() throws IOException { + TestConnection.reset(); + Configuration configuration = createConfiguration(); + connection = ConnectionFactory.createConnection(configuration); + assertThat(TestConnection.connectionMocks.size()).isEqualTo(2); + } private Configuration createConfiguration() { Configuration configuration = new Configuration(); @@ -118,8 +51,10 @@ private Configuration createConfiguration() { MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, TestConnection.class.getCanonicalName()); configuration.set( MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, TestConnection.class.getCanonicalName()); - configuration.set(MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, "1"); - configuration.set(MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, "2"); + // Prefix keys have to be set because we are using the same class as primary and secondary + // connection class. + configuration.set(MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, "primary-connection"); + configuration.set(MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, "secondary-connection"); configuration.set( "google.bigtable.mirroring.write-error-log.appender.prefix-path", "/tmp/test-"); configuration.set("google.bigtable.mirroring.write-error-log.appender.max-buffer-size", "1024"); @@ -130,8 +65,6 @@ private Configuration createConfiguration() { @Test public void testConnectionFactoryCreatesMirroringConnection() throws IOException { - Configuration configuration = createConfiguration(); - Connection connection = ConnectionFactory.createConnection(configuration); assertThat(connection).isInstanceOf(MirroringConnection.class); assertThat(((MirroringConnection) connection).getPrimaryConnection()) .isInstanceOf(TestConnection.class); @@ -141,28 +74,25 @@ public void testConnectionFactoryCreatesMirroringConnection() throws IOException @Test public void testCloseClosesUnderlyingConnections() throws IOException { - TestConnection.mocks.clear(); - Configuration configuration = createConfiguration(); - Connection connection = 
ConnectionFactory.createConnection(configuration); - - assertThat(TestConnection.mocks.size()).isEqualTo(2); connection.close(); assertThat(connection.isClosed()).isTrue(); - verify(TestConnection.mocks.get(0), times(1)).close(); - verify(TestConnection.mocks.get(1), times(1)).close(); + verify(TestConnection.connectionMocks.get(0), times(1)).close(); + verify(TestConnection.connectionMocks.get(1), times(1)).close(); } @Test public void testAbortAbortsUnderlyingConnections() throws IOException { - TestConnection.mocks.clear(); - Configuration configuration = createConfiguration(); - Connection connection = ConnectionFactory.createConnection(configuration); - - assertThat(TestConnection.mocks.size()).isEqualTo(2); String expectedString = "expected"; Throwable expectedThrowable = new Exception(); connection.abort(expectedString, expectedThrowable); - verify(TestConnection.mocks.get(0), times(1)).abort(expectedString, expectedThrowable); - verify(TestConnection.mocks.get(1), times(1)).abort(expectedString, expectedThrowable); + verify(TestConnection.connectionMocks.get(0), times(1)) + .abort(expectedString, expectedThrowable); + verify(TestConnection.connectionMocks.get(1), times(1)) + .abort(expectedString, expectedThrowable); + } + + @Test + public void testConstructorTakingMirroringConfiguration() throws IOException { + new MirroringConnection(new MirroringConfiguration(createConfiguration()), null); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConnectionClosing.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConnectionClosing.java new file mode 100644 index 0000000000..e1415fbdf7 --- /dev/null +++ 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringConnectionClosing.java @@ -0,0 +1,262 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_PRIMARY_CONFIG_PREFIX_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SECONDARY_CONFIG_PREFIX_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY; +import static com.google.common.truth.Truth.assertThat; +import static org.junit.Assert.fail; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; + +import com.google.common.util.concurrent.SettableFuture; +import java.io.IOException; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import 
org.apache.hadoop.hbase.client.Scan; +import org.junit.Before; +import org.junit.Rule; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.mockito.InOrder; +import org.mockito.Mockito; + +@RunWith(JUnit4.class) +public class TestMirroringConnectionClosing { + @Rule + public final ExecutorServiceRule executorServiceRule = + ExecutorServiceRule.spyedSingleThreadedExecutor(); + + private Configuration createConfiguration() { + Configuration configuration = new Configuration(); + configuration.set("hbase.client.connection.impl", MirroringConnection.class.getCanonicalName()); + configuration.set( + MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, TestConnection.class.getCanonicalName()); + configuration.set( + MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, TestConnection.class.getCanonicalName()); + // Prefix keys have to be set because we are using the same class as primary and secondary + // connection class. + configuration.set(MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, "primary-connection"); + configuration.set(MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, "secondary-connection"); + configuration.set( + "google.bigtable.mirroring.write-error-log.appender.prefix-path", "/tmp/test-"); + configuration.set("google.bigtable.mirroring.write-error-log.appender.max-buffer-size", "1024"); + configuration.set( + "google.bigtable.mirroring.write-error-log.appender.drop-on-overflow", "false"); + return configuration; + } + + MirroringConnection mirroringConnection; + MirroringTable mirroringTable; + MirroringResultScanner mirroringScanner; + + @Before + public void setUp() throws IOException { + TestConnection.reset(); + Configuration configuration = createConfiguration(); + + mirroringConnection = + spy( + (MirroringConnection) + ConnectionFactory.createConnection( + configuration, executorServiceRule.executorService)); + assertThat(TestConnection.connectionMocks.size()).isEqualTo(2); + + mirroringTable = (MirroringTable) 
mirroringConnection.getTable(TableName.valueOf("test")); + mirroringScanner = (MirroringResultScanner) mirroringTable.getScanner(new Scan()); + } + + @Test + public void testUnderlyingObjectsAreClosedInCorrectOrder() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + final SettableFuture unblockSecondaryScanner = SettableFuture.create(); + final SettableFuture scannerAndTableClosed = SettableFuture.create(); + final SettableFuture closeFinished = SettableFuture.create(); + TestHelpers.blockMethodCall(TestConnection.scannerMocks.get(1), unblockSecondaryScanner).next(); + + // We expect secondary objects to be closed in the correct order - from the innermost to the + // outermost. + // A TestConnection object is created for both the primary and the secondary connection; the + // connections, tables and scanners created using those connections are stored in static *Mocks + // fields of TestConnection, in order of creation. + // Thus, `TestConnection.connectionMocks.get(1)` is the secondary connection mock, + // `TestConnection.tableMocks.get(1)` is a table created using this connection, + // and `TestConnection.scannerMocks.get(1)` is a scanner created using this table. + InOrder inOrder = + Mockito.inOrder( + TestConnection.scannerMocks.get(1), + TestConnection.tableMocks.get(1), + TestConnection.connectionMocks.get(1)); + + Thread t = + new Thread( + new Runnable() { + @Override + public void run() { + try { + mirroringScanner.next(); + + mirroringScanner.close(); + mirroringTable.close(); + + scannerAndTableClosed.set(null); + mirroringConnection.close(); + closeFinished.set(null); + } catch (Exception e) { + closeFinished.setException(e); + } + } + }); + t.start(); + + // Wait until the scanner and table are closed. + scannerAndTableClosed.get(5, TimeUnit.SECONDS); + // Give mirroringConnection.close() some time to run + Thread.sleep(3000); + // and verify that it was called. + verify(mirroringConnection).close(); + + // Finish async call.
+ unblockSecondaryScanner.set(null); + // The close() should finish. + closeFinished.get(5, TimeUnit.SECONDS); + t.join(); + + executorServiceRule.waitForExecutor(); + + inOrder.verify(TestConnection.scannerMocks.get(1)).close(); + inOrder.verify(TestConnection.tableMocks.get(1)).close(); + inOrder.verify(TestConnection.connectionMocks.get(1)).close(); + + assertThat(mirroringConnection.isClosed()).isTrue(); + verify(TestConnection.connectionMocks.get(0), times(1)).close(); + verify(TestConnection.tableMocks.get(0), times(1)).close(); + verify(TestConnection.scannerMocks.get(0), times(1)).close(); + } + + @Test(timeout = 5000) + public void testClosingConnectionWithoutClosingUnderlyingObjectsShouldntBlock() + throws IOException { + // We have created a connection, a table and a scanner. + // They are not used asynchronously now, so the connection should close without delay. + mirroringConnection.close(); + verify(TestConnection.connectionMocks.get(0)).close(); + verify(TestConnection.connectionMocks.get(1)).close(); + } + + @Test + public void testInFlightRequestBlockClosingConnection() + throws IOException, InterruptedException, TimeoutException, ExecutionException { + final SettableFuture unblockSecondaryScanner = SettableFuture.create(); + final SettableFuture asyncScheduled = SettableFuture.create(); + final SettableFuture closeFinished = SettableFuture.create(); + TestHelpers.blockMethodCall(TestConnection.scannerMocks.get(1), unblockSecondaryScanner).next(); + + Thread t = + new Thread( + new Runnable() { + @Override + public void run() { + try { + mirroringScanner.next(); + + // Not calling close() on the scanner or the table. + asyncScheduled.set(null); + + mirroringConnection.close(); + closeFinished.set(null); + } catch (Exception e) { + closeFinished.setException(e); + } + } + }); + t.start(); + + // Wait until secondary request is scheduled.
+ asyncScheduled.get(5, TimeUnit.SECONDS); + // Give mirroringConnection.close() some time to run + Thread.sleep(3000); + // and verify that it was called. + verify(mirroringConnection).close(); + + // Finish async call. + unblockSecondaryScanner.set(null); + // The close() should finish even though we didn't close the scanner or the table. + closeFinished.get(5, TimeUnit.SECONDS); + t.join(); + } + + @Test + public void testConnectionWaitsForAsynchronousClose() + throws IOException, InterruptedException, TimeoutException, ExecutionException { + final SettableFuture unblockScannerNext = SettableFuture.create(); + final SettableFuture unblockScannerClose = SettableFuture.create(); + final SettableFuture asyncScheduled = SettableFuture.create(); + final SettableFuture closeFinished = SettableFuture.create(); + TestHelpers.blockMethodCall(TestConnection.scannerMocks.get(1), unblockScannerNext).next(); + TestHelpers.blockMethodCall(TestConnection.scannerMocks.get(1), unblockScannerClose).close(); + + Thread t = + new Thread( + new Runnable() { + @Override + public void run() { + try { + mirroringScanner.next(); + mirroringScanner.close(); + asyncScheduled.set(null); + mirroringConnection.close(); + closeFinished.set(null); + } catch (Exception e) { + closeFinished.setException(e); + } + } + }); + t.start(); + + // Wait until secondary request is scheduled. + asyncScheduled.get(5, TimeUnit.SECONDS); + // Unblock scanner next. + unblockScannerNext.set(null); + // Give mirroringConnection.close() and secondaryScanner.close() some time to run + Thread.sleep(1000); + // and verify that they were called. + verify(mirroringConnection).close(); + verify(TestConnection.scannerMocks.get(1)).close(); + + // The secondary close() has not yet finished, so close() should still be blocked. + try { + closeFinished.get(2, TimeUnit.SECONDS); + fail(); + } catch (TimeoutException expected) { + // async operation has not finished - close should block. + } + + // Finish secondaryScanner.close().
+ unblockScannerClose.set(null); + // And now connection.close() should unblock. + closeFinished.get(5, TimeUnit.SECONDS); + t.join(); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringMetrics.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringMetrics.java index fb6f8c2296..6e9bdfe615 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringMetrics.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringMetrics.java @@ -20,11 +20,13 @@ import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createResult; import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.mockBatch; import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerMock; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.FLOW_CONTROL_LATENCY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.MIRRORING_LATENCY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.PRIMARY_ERRORS; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.PRIMARY_LATENCY; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_ERRORS; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_LATENCY; +import static 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.SECONDARY_WRITE_ERROR_HANDLER_LATENCY; import static org.junit.Assert.fail; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyBoolean; @@ -39,14 +41,21 @@ import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.DefaultSecondaryWriteErrorConsumer; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Appender; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.FailedMutationLogger; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Serializer; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringMetricsRecorder; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanFactory; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.DefaultMismatchDetector; import io.opencensus.trace.Tracing; import java.io.IOException; @@ -61,6 +70,7 @@ import org.apache.hadoop.hbase.client.Row; import org.apache.hadoop.hbase.client.RowMutations; import 
org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.junit.Before; import org.junit.Rule; import org.junit.Test; @@ -84,6 +94,7 @@ public class TestMirroringMetrics { @Mock Table primaryTable; @Mock Table secondaryTable; @Mock FlowController flowController; + Timestamper timestamper = new NoopTimestamper(); @Mock MirroringMetricsRecorder mirroringMetricsRecorder; @@ -101,13 +112,19 @@ public void setUp() { primaryTable, secondaryTable, this.executorServiceRule.executorService, - new DefaultMismatchDetector(tracer), + new DefaultMismatchDetector(tracer, 32), flowController, new SecondaryWriteErrorConsumerWithMetrics( - tracer, mock(SecondaryWriteErrorConsumer.class)), + tracer, + new DefaultSecondaryWriteErrorConsumer( + new FailedMutationLogger(mock(Appender.class), mock(Serializer.class)))), new ReadSampler(100), + this.timestamper, false, - tracer)); + false, + tracer, + mock(ReferenceCounter.class), + 10)); } @Test @@ -136,10 +153,10 @@ public void testOperationLatenciesAreRecorded() throws IOException { verify(mirroringMetricsRecorder, times(1)) .recordOperation(eq(HBaseOperation.GET), eq(MIRRORING_LATENCY), anyLong()); + verify(mirroringMetricsRecorder, times(1)) + .recordReadMismatches(any(HBaseOperation.class), eq(0)); verify(mirroringMetricsRecorder, never()) - .recordReadMismatches(any(HBaseOperation.class), anyInt()); - verify(mirroringMetricsRecorder, never()) - .recordWriteMismatches(any(HBaseOperation.class), anyInt()); + .recordSecondaryWriteErrors(any(HBaseOperation.class), anyInt()); } @Test @@ -156,7 +173,87 @@ public void testReadMismatchIsRecorded() throws IOException { verify(mirroringMetricsRecorder, times(1)).recordReadMismatches(HBaseOperation.GET, 1); verify(mirroringMetricsRecorder, never()) - .recordWriteMismatches(any(HBaseOperation.class), anyInt()); + .recordSecondaryWriteErrors(any(HBaseOperation.class), anyInt()); + } + + @Test + public void 
testMatchingReadReportsZeroMismatches() throws IOException { + Get get = createGet("test"); + Result result1 = createResult("test", "value1"); + + when(primaryTable.get(get)).thenReturn(result1); + when(secondaryTable.get(get)).thenReturn(result1); + + mirroringTable.get(get); + executorServiceRule.waitForExecutor(); + + verify(mirroringMetricsRecorder, times(1)).recordReadMismatches(HBaseOperation.GET, 0); + verify(mirroringMetricsRecorder, never()) + .recordSecondaryWriteErrors(any(HBaseOperation.class), anyInt()); + } + + @Test + public void testSuccessfulConditionalWriteReportsZeroFailures() throws IOException { + byte[] row = new byte[] {1}; + byte[] family = new byte[] {2}; + byte[] qualifier = new byte[] {3}; + byte[] value = new byte[] {4}; + + Put put = new Put(row); + put.addColumn(family, qualifier, value); + + RowMutations rm = new RowMutations(row); + rm.add(put); + + when(primaryTable.checkAndMutate(row, family, qualifier, CompareOp.EQUAL, value, rm)) + .thenReturn(true); + + mirroringTable.checkAndPut(row, family, qualifier, CompareOp.EQUAL, value, put); + executorServiceRule.waitForExecutor(); + + verify(mirroringMetricsRecorder, times(1)) + .recordOperation( + eq(HBaseOperation.CHECK_AND_MUTATE), + eq(PRIMARY_LATENCY), + anyLong(), + eq(PRIMARY_ERRORS), + eq(false)); + verify(mirroringMetricsRecorder, times(1)) + .recordOperation( + eq(HBaseOperation.MUTATE_ROW), + eq(SECONDARY_LATENCY), + anyLong(), + eq(SECONDARY_ERRORS), + eq(false)); + verify(mirroringMetricsRecorder, times(1)) + .recordSecondaryWriteErrors(HBaseOperation.MUTATE_ROW, 0); + } + + @Test + public void testSuccessfulWriteReportsZeroFailures() throws IOException, InterruptedException { + Put put = createPut("test", "f1", "q1", "v1"); + + mockBatch(primaryTable, put, new Result()); + mockBatch(secondaryTable, put, new Result()); + + mirroringTable.put(put); + executorServiceRule.waitForExecutor(); + + verify(mirroringMetricsRecorder, times(1)) + .recordOperation( + 
eq(HBaseOperation.BATCH), + eq(PRIMARY_LATENCY), + anyLong(), + eq(PRIMARY_ERRORS), + eq(false)); + verify(mirroringMetricsRecorder, times(1)) + .recordOperation( + eq(HBaseOperation.BATCH), + eq(SECONDARY_LATENCY), + anyLong(), + eq(SECONDARY_ERRORS), + eq(false)); + verify(mirroringMetricsRecorder, times(1)).recordSecondaryWriteErrors(HBaseOperation.BATCH, 0); } @Test @@ -193,7 +290,7 @@ public void testPrimaryErrorMetricIsRecorded() throws IOException { verify(mirroringMetricsRecorder, never()) .recordReadMismatches(any(HBaseOperation.class), anyInt()); verify(mirroringMetricsRecorder, never()) - .recordWriteMismatches(any(HBaseOperation.class), anyInt()); + .recordSecondaryWriteErrors(any(HBaseOperation.class), anyInt()); } @Test @@ -226,7 +323,7 @@ public void testSecondaryErrorMetricIsRecorded() throws IOException { verify(mirroringMetricsRecorder, never()) .recordReadMismatches(any(HBaseOperation.class), anyInt()); verify(mirroringMetricsRecorder, never()) - .recordWriteMismatches(any(HBaseOperation.class), anyInt()); + .recordSecondaryWriteErrors(any(HBaseOperation.class), anyInt()); } @Test @@ -258,9 +355,13 @@ public void testSingleWriteErrorMetricIsRecorded() throws IOException, Interrupt verify(mirroringMetricsRecorder, times(1)) .recordOperation(eq(HBaseOperation.PUT), eq(MIRRORING_LATENCY), anyLong()); + verify(mirroringMetricsRecorder, times(1)).recordLatency(eq(FLOW_CONTROL_LATENCY), anyLong()); + verify(mirroringMetricsRecorder, times(1)) + .recordLatency(eq(SECONDARY_WRITE_ERROR_HANDLER_LATENCY), anyLong()); + verify(mirroringMetricsRecorder, never()) .recordReadMismatches(any(HBaseOperation.class), anyInt()); - verify(mirroringMetricsRecorder, times(1)).recordWriteMismatches(HBaseOperation.BATCH, 1); + verify(mirroringMetricsRecorder, times(1)).recordSecondaryWriteErrors(HBaseOperation.BATCH, 1); } @Test @@ -304,12 +405,7 @@ public Void answer(InvocationOnMock invocationOnMock) throws Throwable { .when(secondaryTable) 
.batch(ArgumentMatchers.anyList(), any(Object[].class)); - try { - mirroringTable.put(put); - } catch (RetriesExhaustedWithDetailsException e) { - - } - + mirroringTable.put(put); executorServiceRule.waitForExecutor(); verify(mirroringMetricsRecorder, times(1)) @@ -333,7 +429,7 @@ public Void answer(InvocationOnMock invocationOnMock) throws Throwable { verify(mirroringMetricsRecorder, never()) .recordReadMismatches(any(HBaseOperation.class), anyInt()); - verify(mirroringMetricsRecorder, times(2)).recordWriteMismatches(HBaseOperation.BATCH, 1); + verify(mirroringMetricsRecorder, times(2)).recordSecondaryWriteErrors(HBaseOperation.BATCH, 1); } @Test @@ -353,12 +449,13 @@ public void testWriteErrorConsumerWithMetricsReportsErrors() { secondaryWriteErrorConsumerWithMetrics.consume(HBaseOperation.PUT_LIST, puts, new Throwable()); verify(secondaryWriteErrorConsumer, times(1)) .consume(eq(HBaseOperation.PUT_LIST), eq(puts), any(Throwable.class)); - verify(mirroringMetricsRecorder, times(1)).recordWriteMismatches(HBaseOperation.PUT_LIST, 2); + verify(mirroringMetricsRecorder, times(1)) + .recordSecondaryWriteErrors(HBaseOperation.PUT_LIST, 2); Put put = createPut("r1", "f", "q", "1"); secondaryWriteErrorConsumerWithMetrics.consume(HBaseOperation.PUT, put, new Throwable()); - verify(mirroringMetricsRecorder, times(1)).recordWriteMismatches(HBaseOperation.PUT, 1); + verify(mirroringMetricsRecorder, times(1)).recordSecondaryWriteErrors(HBaseOperation.PUT, 1); verify(secondaryWriteErrorConsumer, times(1)) .consume(eq(HBaseOperation.PUT), eq(put), any(Throwable.class)); @@ -368,6 +465,7 @@ public void testWriteErrorConsumerWithMetricsReportsErrors() { verify(secondaryWriteErrorConsumer, times(1)) .consume(eq(HBaseOperation.MUTATE_ROW), eq(rowMutations), any(Throwable.class)); - verify(mirroringMetricsRecorder, times(1)).recordWriteMismatches(HBaseOperation.MUTATE_ROW, 1); + verify(mirroringMetricsRecorder, times(1)) + .recordSecondaryWriteErrors(HBaseOperation.MUTATE_ROW, 1); } 
} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestMirroringResultScanner.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringResultScanner.java similarity index 59% rename from bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestMirroringResultScanner.java rename to bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringResultScanner.java index 045b66936f..ea1813cbfa 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestMirroringResultScanner.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringResultScanner.java @@ -13,33 +13,39 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -package com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers; +package com.google.cloud.bigtable.mirroring.hbase1_x; import static com.google.common.truth.Truth.assertThat; import static org.junit.Assert.assertThrows; +import static org.junit.Assert.fail; import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyInt; import static org.mockito.Mockito.doThrow; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; -import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringResultScanner; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringTable.RequestScheduler; +import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncResultScannerWrapper; import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncResultScannerWrapper.AsyncScannerVerificationPayload; import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncResultScannerWrapper.ScannerRequestContext; +import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncTableWrapper; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.VerificationContinuationFactory; import com.google.common.util.concurrent.FutureCallback; -import com.google.common.util.concurrent.Futures; import com.google.common.util.concurrent.ListenableFuture; import com.google.common.util.concurrent.ListeningExecutorService; import 
com.google.common.util.concurrent.MoreExecutors; -import com.google.common.util.concurrent.SettableFuture; import io.opencensus.trace.Span; import io.opencensus.trace.Tracing; import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collection; import java.util.List; import java.util.concurrent.Callable; @@ -50,8 +56,8 @@ import java.util.concurrent.TimeoutException; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.client.Table; import org.checkerframework.checker.nullness.compatqual.NullableDecl; +import org.junit.Before; import org.junit.Test; import org.junit.function.ThrowingRunnable; import org.junit.runner.RunWith; @@ -61,29 +67,36 @@ @RunWith(JUnit4.class) public class TestMirroringResultScanner { @Mock FlowController flowController; + VerificationContinuationFactory continuationFactoryMock = + mock(VerificationContinuationFactory.class); + + @Before + public void setUp() { + MismatchDetector mismatchDetectorMock = mock(MismatchDetector.class); + when(mismatchDetectorMock.createScannerResultVerifier(any(Scan.class), anyInt())) + .thenReturn(mock(MismatchDetector.ScannerResultVerifier.class)); + + when(continuationFactoryMock.getMismatchDetector()).thenReturn(mismatchDetectorMock); + } @Test public void testScannerCloseWhenFirstCloseThrows() throws IOException { ResultScanner primaryScannerMock = mock(ResultScanner.class); - VerificationContinuationFactory continuationFactoryMock = - mock(VerificationContinuationFactory.class); - AsyncResultScannerWrapper secondaryScannerWrapperMock = mock(AsyncResultScannerWrapper.class); - AsyncTableWrapper secondaryAsyncTableWrapperMock = mock(AsyncTableWrapper.class); - when(secondaryAsyncTableWrapperMock.getScanner(any(Scan.class))) - .thenReturn(secondaryScannerWrapperMock); final ResultScanner mirroringScanner = new MirroringResultScanner( new Scan(), primaryScannerMock, - 
secondaryAsyncTableWrapperMock, + secondaryScannerWrapperMock, continuationFactoryMock, - flowController, new MirroringTracer(), - true); - + true, + new RequestScheduler( + flowController, new MirroringTracer(), mock(ListenableReferenceCounter.class)), + mock(ReferenceCounter.class), + 10); doThrow(new RuntimeException("first")).when(primaryScannerMock).close(); Exception thrown = @@ -97,73 +110,64 @@ public void run() { }); verify(primaryScannerMock, times(1)).close(); - verify(secondaryScannerWrapperMock, times(1)).asyncClose(); + verify(secondaryScannerWrapperMock, times(1)).close(); assertThat(thrown).hasMessageThat().contains("first"); } @Test - public void testScannerCloseWhenSecondCloseThrows() throws IOException { + public void testScannerCloseWhenSecondCloseThrows() + throws TimeoutException, InterruptedException { ResultScanner primaryScannerMock = mock(ResultScanner.class); - VerificationContinuationFactory continuationFactoryMock = - mock(VerificationContinuationFactory.class); - AsyncResultScannerWrapper secondaryScannerWrapperMock = mock(AsyncResultScannerWrapper.class); - AsyncTableWrapper secondaryAsyncTableWrapperMock = mock(AsyncTableWrapper.class); - when(secondaryAsyncTableWrapperMock.getScanner(any(Scan.class))) - .thenReturn(secondaryScannerWrapperMock); - final ResultScanner mirroringScanner = + final MirroringResultScanner mirroringScanner = new MirroringResultScanner( new Scan(), primaryScannerMock, - secondaryAsyncTableWrapperMock, + secondaryScannerWrapperMock, continuationFactoryMock, - flowController, new MirroringTracer(), - true); + true, + new RequestScheduler( + flowController, new MirroringTracer(), mock(ListenableReferenceCounter.class)), + mock(ReferenceCounter.class), + 10); - doThrow(new RuntimeException("second")).when(secondaryScannerWrapperMock).asyncClose(); + doThrow(new RuntimeException("second")).when(secondaryScannerWrapperMock).close(); - Exception thrown = - assertThrows( - RuntimeException.class, - new 
ThrowingRunnable() { - @Override - public void run() { - mirroringScanner.close(); - } - }); + mirroringScanner.close(); verify(primaryScannerMock, times(1)).close(); - verify(secondaryScannerWrapperMock, times(1)).asyncClose(); - assertThat(thrown).hasMessageThat().contains("second"); + verify(secondaryScannerWrapperMock, times(1)).close(); + try { + mirroringScanner.closePrimaryAndScheduleSecondaryClose().get(3, TimeUnit.SECONDS); + } catch (ExecutionException e) { + assertThat(e).hasCauseThat().hasMessageThat().contains("second"); + } } @Test - public void testScannerCloseWhenBothCloseThrow() throws IOException { + public void testScannerCloseWhenBothCloseThrow() throws InterruptedException, TimeoutException { ResultScanner primaryScannerMock = mock(ResultScanner.class); - VerificationContinuationFactory continuationFactoryMock = - mock(VerificationContinuationFactory.class); - AsyncResultScannerWrapper secondaryScannerWrapperMock = mock(AsyncResultScannerWrapper.class); - AsyncTableWrapper secondaryAsyncTableWrapperMock = mock(AsyncTableWrapper.class); - when(secondaryAsyncTableWrapperMock.getScanner(any(Scan.class))) - .thenReturn(secondaryScannerWrapperMock); - final ResultScanner mirroringScanner = + final MirroringResultScanner mirroringScanner = new MirroringResultScanner( new Scan(), primaryScannerMock, - secondaryAsyncTableWrapperMock, + secondaryScannerWrapperMock, continuationFactoryMock, - flowController, new MirroringTracer(), - true); + true, + new RequestScheduler( + flowController, new MirroringTracer(), mock(ListenableReferenceCounter.class)), + mock(ReferenceCounter.class), + 10); doThrow(new RuntimeException("first")).when(primaryScannerMock).close(); - doThrow(new RuntimeException("second")).when(secondaryScannerWrapperMock).asyncClose(); + doThrow(new RuntimeException("second")).when(secondaryScannerWrapperMock).close(); RuntimeException thrown = assertThrows( @@ -171,112 +175,125 @@ public void testScannerCloseWhenBothCloseThrow() throws 
IOException { new ThrowingRunnable() { @Override public void run() { - mirroringScanner.close(); + mirroringScanner.closePrimaryAndScheduleSecondaryClose(); } }); + // asyncClose returns a future that will resolve to the secondary error. + // Second call to closePrimaryAndScheduleSecondaryClose() should not perform any other operation. + ListenableFuture asyncCloseResult = + mirroringScanner.closePrimaryAndScheduleSecondaryClose(); + verify(primaryScannerMock, times(1)).close(); - verify(secondaryScannerWrapperMock, times(1)).asyncClose(); assertThat(thrown).hasMessageThat().contains("first"); - assertThat(thrown.getSuppressed()).hasLength(1); - assertThat(thrown.getSuppressed()[0]).hasMessageThat().contains("second"); + try { + asyncCloseResult.get(3, TimeUnit.SECONDS); + fail(); + } catch (ExecutionException e) { + assertThat(e).hasCauseThat().hasMessageThat().contains("second"); + } + + verify(secondaryScannerWrapperMock, times(1)).close(); } @Test public void testMultipleCloseCallsCloseScannersOnlyOnce() throws IOException { ResultScanner primaryScannerMock = mock(ResultScanner.class); - VerificationContinuationFactory continuationFactoryMock = - mock(VerificationContinuationFactory.class); AsyncResultScannerWrapper secondaryScannerWrapperMock = mock(AsyncResultScannerWrapper.class); - SettableFuture closedFuture = SettableFuture.create(); - closedFuture.set(null); - when(secondaryScannerWrapperMock.asyncClose()).thenReturn(closedFuture); - - AsyncTableWrapper secondaryAsyncTableWrapperMock = mock(AsyncTableWrapper.class); - when(secondaryAsyncTableWrapperMock.getScanner(any(Scan.class))) - .thenReturn(secondaryScannerWrapperMock); final ResultScanner mirroringScanner = new MirroringResultScanner( new Scan(), primaryScannerMock, - secondaryAsyncTableWrapperMock, + secondaryScannerWrapperMock, continuationFactoryMock, - flowController, new MirroringTracer(), - true); + true, + new RequestScheduler( + flowController, new MirroringTracer(),
mock(ListenableReferenceCounter.class)), + mock(ReferenceCounter.class), + 10); mirroringScanner.close(); mirroringScanner.close(); verify(primaryScannerMock, times(1)).close(); - verify(secondaryScannerWrapperMock, times(1)).asyncClose(); + verify(secondaryScannerWrapperMock, times(1)).close(); } @Test public void testSecondaryNextsAreIssuedInTheSameOrderAsPrimary() throws IOException { + // AsyncRequestWrapper has a concurrent queue of primary scanner results (with some context). + // When a next() is requested, it puts its context into the queue. + // Then it acquires a mutex and while holding it pops a context from the queue + // and then runs next() on underlying ResultScanner from secondary database. + // Later it joins the primary and secondary results in an object further passed to + // MismatchDetector. + // This test proves that even if the asynchronous requests get reordered, the + // queue is emptied in order (so that results of primary and secondary scanner + // are paired as intended). + AsyncResultScannerWrapper secondaryScannerWrapperMock = mock(AsyncResultScannerWrapper.class); AsyncTableWrapper secondaryAsyncTableWrapperMock = mock(AsyncTableWrapper.class); when(secondaryAsyncTableWrapperMock.getScanner(any(Scan.class))) .thenReturn(secondaryScannerWrapperMock); - Table table = mock(Table.class); ResultScanner resultScanner = mock(ResultScanner.class); + // We force reordering of secondary requests. 
ReverseOrderExecutorService reverseOrderExecutorService = new ReverseOrderExecutorService(); ListeningExecutorService listeningExecutorService = MoreExecutors.listeningDecorator(reverseOrderExecutorService); final AsyncResultScannerWrapper asyncResultScannerWrapper = new AsyncResultScannerWrapper( - table, resultScanner, listeningExecutorService, new MirroringTracer()); + resultScanner, listeningExecutorService, new MirroringTracer()); final List calls = new ArrayList<>(); Span span = Tracing.getTracer().spanBuilder("test").startSpan(); - ScannerRequestContext c1 = new ScannerRequestContext(null, null, 1, span); - ScannerRequestContext c2 = new ScannerRequestContext(null, null, 2, span); - ScannerRequestContext c3 = new ScannerRequestContext(null, null, 3, span); - ScannerRequestContext c4 = new ScannerRequestContext(null, null, 4, span); - ScannerRequestContext c5 = new ScannerRequestContext(null, null, 5, span); - ScannerRequestContext c6 = new ScannerRequestContext(null, null, 6, span); - - catchResult(asyncResultScannerWrapper.next(c1).get(), calls); - catchResult(asyncResultScannerWrapper.next(c2).get(), calls); - catchResult(asyncResultScannerWrapper.next(c3).get(), calls); - catchResult(asyncResultScannerWrapper.next(c4).get(), calls); - catchResult(asyncResultScannerWrapper.next(c5).get(), calls); - catchResult(asyncResultScannerWrapper.next(c6).get(), calls); + List contexts = + Arrays.asList( + new ScannerRequestContext(null, null, 1, span), + new ScannerRequestContext(null, null, 2, span), + new ScannerRequestContext(null, null, 3, span), + new ScannerRequestContext(null, null, 4, span), + new ScannerRequestContext(null, null, 5, span), + new ScannerRequestContext(null, null, 6, span)); + + for (ScannerRequestContext ctx : contexts) { + asyncResultScannerWrapper.next(ctx).get(); + } - reverseOrderExecutorService.callCallables(); + reverseOrderExecutorService.callScheduledCallables(); + verify(resultScanner, times(6)).next(anyInt()); - 
verify(resultScanner, times(6)).next(); - assertThat(calls).containsExactly(c1, c2, c3, c4, c5, c6); + for (int i = 0; i < contexts.size(); i++) { + assertThat(asyncResultScannerWrapper.nextResultQueue.remove().context) + .isEqualTo(contexts.get(i)); + } } - private void catchResult( - ListenableFuture next, - final List calls) { - Futures.addCallback( - next, - new FutureCallback() { - @Override - public void onSuccess( - @NullableDecl AsyncScannerVerificationPayload asyncScannerVerificationPayload) { - calls.add(asyncScannerVerificationPayload.context); - } - - @Override - public void onFailure(Throwable throwable) {} - }, - MoreExecutors.directExecutor()); + private FutureCallback addContextToListCallback( + final List list) { + return new FutureCallback() { + @Override + public void onSuccess( + @NullableDecl AsyncScannerVerificationPayload asyncScannerVerificationPayload) { + list.add(asyncScannerVerificationPayload.context); + } + + @Override + public void onFailure(Throwable throwable) {} + }; } static class ReverseOrderExecutorService implements ExecutorService { + List callables = new ArrayList<>(); - public void callCallables() { + public void callScheduledCallables() { for (int i = callables.size() - 1; i >= 0; i--) { callables.get(i).run(); } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTable.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTable.java index b9cae3a2a3..4bb6b6eaaf 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTable.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTable.java @@ -40,19 
+40,25 @@ import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringMetricsRecorder; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanFactory; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.DefaultMismatchDetector; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector.ScannerResultVerifier; import com.google.common.collect.ImmutableList; import com.google.common.primitives.Longs; -import com.google.common.util.concurrent.ListenableFuture; import com.google.common.util.concurrent.MoreExecutors; import com.google.common.util.concurrent.SettableFuture; +import io.opencensus.trace.Tracing; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; @@ -106,15 +112,24 @@ public class TestMirroringTable { @Mock Table primaryTable; @Mock Table secondaryTable; - @Mock 
MismatchDetector mismatchDetector; @Mock FlowController flowController; @Mock SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; + @Mock ReferenceCounter referenceCounter; + @Mock MirroringMetricsRecorder mirroringMetricsRecorder; + Timestamper timestamper = new NoopTimestamper(); + MismatchDetector mismatchDetector; MirroringTable mirroringTable; + MirroringTracer mirroringTracer; @Before public void setUp() { setupFlowControllerMock(flowController); + this.mirroringTracer = + new MirroringTracer( + new MirroringSpanFactory(Tracing.getTracer(), mirroringMetricsRecorder), + mirroringMetricsRecorder); + this.mismatchDetector = spy(new DefaultMismatchDetector(this.mirroringTracer, 100)); this.mirroringTable = spy( new MirroringTable( @@ -125,13 +140,19 @@ public void setUp() { flowController, secondaryWriteErrorConsumer, new ReadSampler(100), + this.timestamper, false, - new MirroringTracer())); + false, + this.mirroringTracer, + this.referenceCounter, + 1000)); } private void waitForMirroringScanner(ResultScanner mirroringScanner) throws InterruptedException, ExecutionException, TimeoutException { - ((MirroringResultScanner) mirroringScanner).asyncClose().get(3, TimeUnit.SECONDS); + ((MirroringResultScanner) mirroringScanner) + .closePrimaryAndScheduleSecondaryClose() + .get(3, TimeUnit.SECONDS); } @Test @@ -153,6 +174,24 @@ public void testMismatchDetectorIsCalledOnGetSingle() throws IOException { .get(ArgumentMatchers.anyList(), any(Result[].class), any(Result[].class)); } + @Test + public void testPrimaryReadExceptionDoesntCallSecondaryNorVerification() throws IOException { + Get request = createGet("test"); + IOException expectedException = new IOException("expected"); + when(primaryTable.get(request)).thenThrow(expectedException); + + try { + mirroringTable.get(request); + fail("should have thrown"); + } catch (IOException e) { + assertThat(e).isEqualTo(expectedException); + } + executorServiceRule.waitForExecutor(); + + verify(secondaryTable, 
never()).get(any(Get.class)); + verify(mismatchDetector, never()).get(request, expectedException); + } + @Test public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnSingleGet() throws IOException { @@ -305,11 +344,15 @@ public void testMismatchDetectorIsCalledOnScannerNextOne() waitForMirroringScanner(mirroringScanner); executorServiceRule.waitForExecutor(); - verify(mismatchDetector, times(1)).scannerNext(scan, 0, expected1, expected1); - verify(mismatchDetector, times(1)).scannerNext(scan, 1, expected2, expected2); - verify(mismatchDetector, times(1)).scannerNext(scan, 2, (Result) null, null); + verify(mismatchDetector, times(1)) + .scannerNext(eq(scan), any(ScannerResultVerifier.class), eq(expected1), eq(expected1)); + verify(mismatchDetector, times(1)) + .scannerNext(eq(scan), any(ScannerResultVerifier.class), eq(expected2), eq(expected2)); + verify(mismatchDetector, times(1)) + .scannerNext( + eq(scan), any(ScannerResultVerifier.class), eq((Result) null), eq((Result) null)); verify(mismatchDetector, times(3)) - .scannerNext(eq(scan), anyInt(), (Result) any(), (Result) any()); + .scannerNext(eq(scan), any(ScannerResultVerifier.class), (Result) any(), (Result) any()); } @Test @@ -344,11 +387,14 @@ public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnScannerNext waitForMirroringScanner(mirroringScanner); executorServiceRule.waitForExecutor(); - verify(mismatchDetector, times(1)).scannerNext(scan, 0, expected1, expected1); - verify(mismatchDetector, times(1)).scannerNext(scan, 1, expectedException); - verify(mismatchDetector, times(1)).scannerNext(scan, 2, (Result) null, null); + verify(mismatchDetector, times(1)) + .scannerNext(eq(scan), any(ScannerResultVerifier.class), eq(expected1), eq(expected1)); + verify(mismatchDetector, times(1)).scannerNext(scan, expectedException); + verify(mismatchDetector, times(1)) + .scannerNext( + eq(scan), any(ScannerResultVerifier.class), eq((Result) null), eq((Result) null)); 
verify(mismatchDetector, times(2)) - .scannerNext(eq(scan), anyInt(), (Result) any(), (Result) any()); + .scannerNext(eq(scan), any(ScannerResultVerifier.class), (Result) any(), (Result) any()); } @Test @@ -375,7 +421,8 @@ public void testMismatchDetectorIsCalledOnScannerNextMultiple() waitForMirroringScanner(mirroringScanner); executorServiceRule.waitForExecutor(); - verify(mismatchDetector, times(1)).scannerNext(scan, 0, expected, expected); + verify(mismatchDetector, times(1)) + .scannerNext(eq(scan), any(ScannerResultVerifier.class), eq(expected), eq(expected)); } @Test @@ -403,7 +450,7 @@ public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnScannerNext waitForMirroringScanner(mirroringScanner); executorServiceRule.waitForExecutor(); - verify(mismatchDetector, times(1)).scannerNext(scan, 0, 2, expectedException); + verify(mismatchDetector, times(1)).scannerNext(scan, 2, expectedException); } @Test @@ -426,7 +473,7 @@ public void testScannerClose() } @Test - public void testScannerRenewLease() + public void testScannerRenewLeaseSecondaryFailed() throws IOException, InterruptedException, ExecutionException, TimeoutException { ResultScanner primaryScannerMock = mock(ResultScanner.class); when(primaryScannerMock.renewLease()).thenReturn(true); @@ -448,33 +495,52 @@ public void testScannerRenewLease() } @Test - public void testClosingTableWithFutureDecreasesListenableCounter() + public void testScannerRenewLeaseSecondaryUnsupported() throws IOException, InterruptedException, ExecutionException, TimeoutException { - ListenableReferenceCounter listenableReferenceCounter = spy(new ListenableReferenceCounter()); - listenableReferenceCounter.holdReferenceUntilClosing(mirroringTable); + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.renewLease()).thenReturn(true); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); + + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + 
when(secondaryScannerMock.renewLease()) + .thenThrow(new UnsupportedOperationException("expected")); + when(secondaryTable.getScanner((Scan) any())).thenReturn(secondaryScannerMock); + + Scan scan = new Scan(); + ResultScanner mirroringScanner = mirroringTable.getScanner(scan); - verify(listenableReferenceCounter, times(1)).incrementReferenceCount(); - verify(listenableReferenceCounter, never()).decrementReferenceCount(); - final ListenableFuture closingFuture = mirroringTable.asyncClose(); - closingFuture.get(3, TimeUnit.SECONDS); - verify(listenableReferenceCounter, times(1)).decrementReferenceCount(); + // Secondary's renewLease threw UnsupportedOperationException, thus we assume that it is a + // Bigtable scanner and renewing the lease is not needed. Primary succeeded and we should + // return true. + assertThat(mirroringScanner.renewLease()).isTrue(); + + waitForMirroringScanner(mirroringScanner); + executorServiceRule.waitForExecutor(); + verify(secondaryScannerMock, times(1)).renewLease(); } @Test - public void testClosingTableWithoutFutureDecreasesListenableCounter() throws IOException { - ListenableReferenceCounter listenableReferenceCounter = spy(new ListenableReferenceCounter()); - listenableReferenceCounter.holdReferenceUntilClosing(mirroringTable); + public void testScannerRenewLeasePrimaryUnsupported() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.renewLease()).thenThrow(new UnsupportedOperationException("expected")); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); - verify(listenableReferenceCounter, times(1)).incrementReferenceCount(); - verify(listenableReferenceCounter, never()).decrementReferenceCount(); + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + when(secondaryScannerMock.renewLease()).thenReturn(true); + when(secondaryTable.getScanner((Scan)
any())).thenReturn(secondaryScannerMock); - IOException expectedException = new IOException("expected"); - doThrow(expectedException).when(secondaryTable).close(); + Scan scan = new Scan(); + ResultScanner mirroringScanner = mirroringTable.getScanner(scan); - mirroringTable.close(); - executorServiceRule.waitForExecutor(); + // Primary's renewLease threw UnsupportedOperationException, thus we assume that it is a + // Bigtable scanner and renewing the lease is not needed. Secondary succeeded and we should + // return true. + assertThat(mirroringScanner.renewLease()).isTrue(); - verify(listenableReferenceCounter, times(1)).decrementReferenceCount(); + waitForMirroringScanner(mirroringScanner); + executorServiceRule.waitForExecutor(); + verify(secondaryScannerMock, times(1)).renewLease(); } @Test @@ -496,57 +562,6 @@ public void run() throws Throwable { verify(secondaryTable, times(1)).close(); } - @Test - public void testListenersAreCalledOnClose() - throws IOException, InterruptedException, ExecutionException, TimeoutException { - - final SettableFuture listenerFuture1 = SettableFuture.create(); - mirroringTable.addOnCloseListener( - new Runnable() { - @Override - public void run() { - listenerFuture1.set(1); - } - }); - - final SettableFuture listenerFuture2 = SettableFuture.create(); - mirroringTable.addOnCloseListener( - new Runnable() { - @Override - public void run() { - listenerFuture2.set(2); - } - }); - - mirroringTable.asyncClose().get(3, TimeUnit.SECONDS); - assertThat(listenerFuture1.get(3, TimeUnit.SECONDS)).isEqualTo(1); - assertThat(listenerFuture2.get(3, TimeUnit.SECONDS)).isEqualTo(2); - } - - @Test - public void testListenersAreNotCalledAfterSecondClose() - throws IOException, InterruptedException, ExecutionException, TimeoutException { - - final SettableFuture listenerFuture1 = SettableFuture.create(); - - Runnable onCloseAction = - spy( - new Runnable() { - @Override - public void run() { - listenerFuture1.set(1); - } - }); -
mirroringTable.addOnCloseListener(onCloseAction); - - mirroringTable.asyncClose().get(3, TimeUnit.SECONDS); - assertThat(listenerFuture1.get(3, TimeUnit.SECONDS)).isEqualTo(1); - mirroringTable.asyncClose().get(3, TimeUnit.SECONDS); - - verify(onCloseAction, times(1)).run(); - } - @Test public void testPutIsMirrored() throws IOException, InterruptedException { Put put = createPut("test", "f1", "q1", "v1"); @@ -643,7 +658,7 @@ public void testBatchGetAndPutGetsAreVerifiedOnSuccess() verify(secondaryTable, times(1)).batch(eq(secondaryRequests), argument.capture()); assertThat(argument.getValue().length).isEqualTo(2); - // successful secondary reads were reported + // failed secondary reads were reported verify(mismatchDetector, times(1)) .batch(Arrays.asList(get1), new Result[] {get1Result}, new Result[] {get1Result}); } @@ -717,11 +732,11 @@ public void testBatchGetAndPut() throws IOException, InterruptedException { verify(secondaryWriteErrorConsumer, times(1)) .consume(eq(HBaseOperation.BATCH), eq(put1), (Throwable) isNull()); - // successful secondary reads were reported + // failed secondary reads were reported verify(mismatchDetector, times(1)) .batch(Arrays.asList(get3), new Result[] {get3Result}, new Result[] {get3Result}); - // successful secondary reads were reported + // failed secondary reads were reported verify(mismatchDetector, times(1)).batch(eq(Arrays.asList(get1)), any(IOException.class)); } @@ -758,7 +773,7 @@ public void testBatchGetsPrimaryFailsSecondaryOk() throws IOException, Interrupt verify(secondaryTable, times(1)).batch(eq(secondaryRequests), argument.capture()); assertThat(argument.getValue().length).isEqualTo(1); - // successful secondary reads were reported + // failed secondary reads were reported verify(mismatchDetector, times(1)) .batch(Arrays.asList(get2), new Result[] {get2Result}, new Result[] {get2Result}); @@ -821,12 +836,7 @@ public void testCheckAndPut() throws IOException { .thenReturn(true); mirroringTable.checkAndPut( - 
"r1".getBytes(), - "f1".getBytes(), - "q1".getBytes(), - CompareOp.GREATER_OR_EQUAL, - "v1".getBytes(), - put); + "r1".getBytes(), "f1".getBytes(), "q1".getBytes(), "v1".getBytes(), put); executorServiceRule.waitForExecutor(); verify(secondaryTable, times(1)).mutateRow(any(RowMutations.class)); @@ -844,19 +854,11 @@ public void testCheckAndDelete() throws IOException { any(RowMutations.class))) .thenReturn(true); - mirroringTable.checkAndDelete( - "r1".getBytes(), - "f1".getBytes(), - "q1".getBytes(), - CompareOp.GREATER_OR_EQUAL, - "v1".getBytes(), - delete); - mirroringTable.checkAndDelete( "r1".getBytes(), "f1".getBytes(), "q1".getBytes(), "v1".getBytes(), delete); executorServiceRule.waitForExecutor(); - verify(secondaryTable, times(2)).mutateRow(any(RowMutations.class)); + verify(secondaryTable, times(1)).mutateRow(any(RowMutations.class)); } @Test @@ -997,6 +999,108 @@ public int compare(Cell a, Cell b) { }); } + @Test + public void testAppendWhichDoesntWantResult() throws IOException { + final byte[] row = "r1".getBytes(); + final byte[] family = "f1".getBytes(); + final byte[] qualifier = "q1".getBytes(); + final long ts = 12; + final byte[] value = "v1".getBytes(); + + Append appendIgnoringResult = new Append(row).setReturnResults(false); + + when(primaryTable.append(any(Append.class))) + .thenReturn( + Result.create( + new Cell[] { + CellUtil.createCell(row, family, qualifier, ts, Type.Put.getCode(), value) + })); + Result appendWithoutResult = mirroringTable.append(appendIgnoringResult); + + ArgumentCaptor appendCaptor = ArgumentCaptor.forClass(Append.class); + verify(primaryTable, times(1)).append(appendCaptor.capture()); + assertThat(appendCaptor.getValue().isReturnResults()).isTrue(); + assertThat(appendWithoutResult).isNull(); + } + + @Test + public void testIncrementWhichDoesntWantResult() throws IOException { + final byte[] row = "r1".getBytes(); + final byte[] family = "f1".getBytes(); + final byte[] qualifier = "q1".getBytes(); + final long ts = 
12; + final byte[] value = "v1".getBytes(); + + Increment incrementIgnoringResult = new Increment(row).setReturnResults(false); + + when(primaryTable.increment(any(Increment.class))) + .thenReturn( + Result.create( + new Cell[] { + CellUtil.createCell(row, family, qualifier, ts, Type.Put.getCode(), value) + })); + Result incrementWithoutResult = mirroringTable.increment(incrementIgnoringResult); + + ArgumentCaptor incrementCaptor = ArgumentCaptor.forClass(Increment.class); + verify(primaryTable, times(1)).increment(incrementCaptor.capture()); + assertThat(incrementCaptor.getValue().isReturnResults()).isTrue(); + assertThat(incrementWithoutResult.value()).isNull(); + } + + @Test + public void testBatchAppendWhichDoesntWantResult() throws IOException, InterruptedException { + final byte[] row = "r1".getBytes(); + final byte[] family = "f1".getBytes(); + final byte[] qualifier = "q1".getBytes(); + final long ts = 12; + final byte[] value = "v1".getBytes(); + + List batchAppendIgnoringResult = + Collections.singletonList(new Append(row).setReturnResults(false)); + + mockBatch( + primaryTable, + batchAppendIgnoringResult.get(0), + Result.create( + new Cell[] { + CellUtil.createCell(row, family, qualifier, ts, Type.Put.getCode(), value) + })); + Object[] batchAppendWithoutResult = mirroringTable.batch(batchAppendIgnoringResult); + + ArgumentCaptor> listCaptor = ArgumentCaptor.forClass(List.class); + verify(primaryTable, times(1)).batch(listCaptor.capture(), any(Object[].class)); + assertThat(listCaptor.getValue().size()).isEqualTo(1); + assertThat(((Append) listCaptor.getValue().get(0)).isReturnResults()).isTrue(); + assertThat(((Result) batchAppendWithoutResult[0]).value()).isNull(); + } + + @Test + public void testBatchIncrementWhichDoesntWantResult() throws IOException, InterruptedException { + final byte[] row = "r1".getBytes(); + final byte[] family = "f1".getBytes(); + final byte[] qualifier = "q1".getBytes(); + final long ts = 12; + final byte[] value = 
"v1".getBytes(); + + List batchIncrementIgnoringResult = + Collections.singletonList(new Increment(row).setReturnResults(false)); + + mockBatch( + primaryTable, + batchIncrementIgnoringResult.get(0), + Result.create( + new Cell[] { + CellUtil.createCell(row, family, qualifier, ts, Type.Put.getCode(), value) + })); + Object[] batchIncrementWithoutResult = mirroringTable.batch(batchIncrementIgnoringResult); + + ArgumentCaptor> listCaptor = ArgumentCaptor.forClass(List.class); + verify(primaryTable, times(1)).batch(listCaptor.capture(), any(Object[].class)); + assertThat(listCaptor.getValue().size()).isEqualTo(1); + assertThat(((Increment) listCaptor.getValue().get(0)).isReturnResults()).isTrue(); + assertThat(((Result) batchIncrementWithoutResult[0]).value()).isNull(); + } + @Test public void testBatchWithCallback() throws IOException, InterruptedException { List mutations = Arrays.asList(createGet("get1")); @@ -1157,6 +1261,8 @@ public void testBatchWithAppendsAndIncrements() throws IOException, InterruptedE @Test public void testConcurrentWritesAreFlowControlledBeforePrimaryAction() throws IOException, InterruptedException { + boolean performWritesConcurrently = true; + boolean waitForSecondaryWrites = true; this.mirroringTable = spy( new MirroringTable( @@ -1167,9 +1273,12 @@ public void testConcurrentWritesAreFlowControlledBeforePrimaryAction() flowController, secondaryWriteErrorConsumer, new ReadSampler(100), - true, - new MirroringTracer())); - + this.timestamper, + performWritesConcurrently, + waitForSecondaryWrites, + this.mirroringTracer, + this.referenceCounter, + 10)); Put put1 = createPut("r1", "f1", "q1", "v1"); // Both batches should be called even if first one fails. 
@@ -1194,7 +1303,7 @@ public void testConcurrentWritesAreFlowControlledBeforePrimaryAction() @Test public void testNonConcurrentOpsWontBePerformedConcurrently() throws IOException, InterruptedException { - setupMirroringTableWithDirectExecutor(); + setupConcurrentMirroringTableWithDirectExecutor(); Get get = createGet("get1"); Increment increment = new Increment("row".getBytes()); Append append = new Append("row".getBytes()); @@ -1202,6 +1311,12 @@ public void testNonConcurrentOpsWontBePerformedConcurrently() Put put = createPut("test1", "f1", "q1", "v1"); Delete delete = createDelete("test2"); + Put putMutation = createPut("test3", "f1", "q1", "v1"); + Delete deleteMutation = createDelete("test3"); + RowMutations rowMutations = new RowMutations("test3".getBytes()); + rowMutations.add(putMutation); + rowMutations.add(deleteMutation); + mockBatch( primaryTable, secondaryTable, @@ -1212,20 +1327,32 @@ public void testNonConcurrentOpsWontBePerformedConcurrently() append, createResult("row", "v2")); + // Only Puts and Deletes (and RowMutations, which can contain only Puts and Deletes) + // can be performed concurrently. Other operations force us to wait for the primary result + // (e.g. Increment on the secondary is implemented as a Put of the result from the primary). + // We expect that even though our MirroringTable is concurrent, operations which cannot be + // performed concurrently will be performed sequentially. + + // Batch contains an operation which causes the batch to be performed sequentially. checkBatchCalledSequentially(Arrays.asList(get)); checkBatchCalledSequentially(Arrays.asList(increment)); checkBatchCalledSequentially(Arrays.asList(append)); + // Batch contains only operations which can be performed concurrently.
checkBatchCalledConcurrently(Arrays.asList(put)); + checkBatchCalledConcurrently(Arrays.asList(delete)); - checkBatchCalledConcurrently(Arrays.asList(put, delete)); + checkBatchCalledConcurrently(Arrays.asList(rowMutations)); + checkBatchCalledConcurrently(Arrays.asList(put, delete, rowMutations)); - checkBatchCalledSequentially(Arrays.asList(put, delete, get)); - checkBatchCalledSequentially(Arrays.asList(put, delete, increment)); - checkBatchCalledSequentially(Arrays.asList(put, delete, append)); + // Batch contains an operation which causes the batch to be performed sequentially. + checkBatchCalledSequentially(Arrays.asList(put, delete, rowMutations, get)); + checkBatchCalledSequentially(Arrays.asList(put, delete, rowMutations, increment)); + checkBatchCalledSequentially(Arrays.asList(put, delete, rowMutations, append)); } - private void setupMirroringTableWithDirectExecutor() { + private void setupConcurrentMirroringTableWithDirectExecutor() { + boolean performWritesConcurrently = true; + boolean waitForSecondaryWrites = true; this.mirroringTable = spy( new MirroringTable( @@ -1236,8 +1363,12 @@ private void setupMirroringTableWithDirectExecutor() { flowController, secondaryWriteErrorConsumer, new ReadSampler(100), - true, - new MirroringTracer())); + this.timestamper, + performWritesConcurrently, + waitForSecondaryWrites, + this.mirroringTracer, + this.referenceCounter, + 10)); } private void checkBatchCalledSequentially(List<? extends Row> requests) @@ -1254,13 +1385,19 @@ private void checkBatchCalledConcurrently(List<? extends Row> requests) InOrder inOrder = Mockito.inOrder(primaryTable, flowController, secondaryTable); this.mirroringTable.batch(requests, new Object[requests.size()]); inOrder.verify(flowController).asyncRequestResource(any(RequestResourcesDescription.class)); + // When a batch is performed concurrently, the secondary database request is first scheduled + // asynchronously, and then the primary request is executed synchronously.
+ // Then, depending on the configuration, the mirroring client may wait for the secondary + // result. To verify that the batch is called concurrently, we configure the + // MirroringTable to wait for secondary results and use a DirectExecutor. + // This guarantees that the method on the secondary table is called first. inOrder.verify(secondaryTable).batch(eq(requests), any(Object[].class)); inOrder.verify(primaryTable).batch(eq(requests), any(Object[].class)); } @Test public void testConcurrentWritesWithErrors() throws IOException, InterruptedException { - setupMirroringTableWithDirectExecutor(); + setupConcurrentMirroringTableWithDirectExecutor(); Put put1 = createPut("test1", "f1", "q1", "v1"); Put put2 = createPut("test2", "f2", "q2", "v2"); @@ -1277,9 +1414,7 @@ public void testConcurrentWritesWithErrors() throws IOException, InterruptedExce // | p1 | p2 | p3 | p4 | d1 | d2 | d3 | d4 // primary | v | v | x | x | v | v | x | x // secondary | v | x | v | x | v | x | v | x - // Primary errors should be visible in the results. - // Secondary errors should be written to the faillog. - // Operations that failed on both primary and secondary shouldn't be reported to the faillog. + // All errors should be visible in the results.
IOException put2exception = new IOException("put2"); IOException put3exception = new IOException("put3"); @@ -1334,24 +1469,39 @@ public void testConcurrentWritesWithErrors() throws IOException, InterruptedExce } catch (IOException ignored) { } assertThat(results[0]).isInstanceOf(Result.class); - assertThat(results[1]).isInstanceOf(Result.class); + assertThat(results[1]).isEqualTo(put2exception); assertThat(results[2]).isEqualTo(put3exception); assertThat(results[3]).isEqualTo(put4exception); assertThat(results[4]).isInstanceOf(Result.class); - assertThat(results[5]).isInstanceOf(Result.class); + assertThat(results[5]).isEqualTo(delete2exception); assertThat(results[6]).isEqualTo(delete3exception); assertThat(results[7]).isEqualTo(delete4exception); - verify(secondaryWriteErrorConsumer, times(1)) - .consume(HBaseOperation.BATCH, put2, put2exception); - verify(secondaryWriteErrorConsumer, times(1)) - .consume(HBaseOperation.BATCH, delete2, delete2exception); + verify(secondaryWriteErrorConsumer, never()) + .consume(any(HBaseOperation.class), any(Put.class), any(Throwable.class)); } @Test public void testConcurrentOpsAreRunConcurrently() throws IOException, InterruptedException { - setupMirroringTableWithDirectExecutor(); + boolean performWritesConcurrently = true; + boolean waitForSecondaryWrites = true; + this.mirroringTable = + spy( + new MirroringTable( + primaryTable, + secondaryTable, + this.executorServiceRule.executorService, + mismatchDetector, + flowController, + secondaryWriteErrorConsumer, + new ReadSampler(100), + this.timestamper, + performWritesConcurrently, + waitForSecondaryWrites, + this.mirroringTracer, + this.referenceCounter, + 10)); Put put = createPut("test1", "f1", "q1", "v1"); mockBatch(primaryTable, secondaryTable); @@ -1393,19 +1543,298 @@ public Object answer(InvocationOnMock invocationOnMock) throws Throwable { public void testConcurrentOpsAreNotPerformedWhenFlowControllerRejectsRequest() throws IOException, InterruptedException { 
IOException flowControllerExpection = setupFlowControllerToRejectRequests(flowController); - setupMirroringTableWithDirectExecutor(); + setupConcurrentMirroringTableWithDirectExecutor(); Put put = createPut("test1", "f1", "q1", "v1"); try { mirroringTable.put(put); fail("should throw"); } catch (IOException e) { - // FlowController exception is wrapped in IOException by mirroringTable and in - // ExecutionException by a future. - assertThat(e).hasCauseThat().hasCauseThat().isEqualTo(flowControllerExpection); + // FlowController exception is wrapped in IOException by mirroringTable. + assertThat(e).hasCauseThat().isEqualTo(flowControllerExpection); } verify(primaryTable, never()).batch(ArgumentMatchers.anyList(), any(Object[].class)); verify(secondaryTable, never()).batch(ArgumentMatchers.anyList(), any(Object[].class)); } + + @Test + public void testUnmatchedScannerResultQueuesAreFlushedWhenResultScannerIsClosed() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + ScannerResultVerifier verifier = + spy(mismatchDetector.createScannerResultVerifier(new Scan(), 100)); + when(mismatchDetector.createScannerResultVerifier(any(Scan.class), anyInt())) + .thenReturn(verifier); + + Result expected1 = createResult("test1", "value1"); + Result expected2 = createResult("test2", "value2"); + + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.next()).thenReturn(expected1); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); + + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + when(secondaryScannerMock.next()).thenReturn(expected2); + when(secondaryTable.getScanner((Scan) any())).thenReturn(secondaryScannerMock); + + Scan scan = new Scan(); + ResultScanner mirroringScanner = spy(mirroringTable.getScanner(scan)); + assertThat(mirroringScanner.next()).isEqualTo(expected1); + verify(mirroringMetricsRecorder, never()) + .recordReadMismatches(any(HBaseOperation.class), 
eq(1)); + waitForMirroringScanner(mirroringScanner); + + verify(verifier, times(1)).verify(any(Result[].class), any(Result[].class)); + verify(mirroringMetricsRecorder, never()) + .recordReadMatches(any(HBaseOperation.class), anyInt()); + verify(mirroringMetricsRecorder, times(2)) + .recordReadMismatches(any(HBaseOperation.class), eq(1)); + } + + @Test + public void testScannerResultVerifierWithBufferSizeZero() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + ScannerResultVerifier verifier = + spy(mismatchDetector.createScannerResultVerifier(new Scan(), 0)); + when(mismatchDetector.createScannerResultVerifier(any(Scan.class), anyInt())) + .thenReturn(verifier); + + Result expected1 = createResult("test1", "value1"); + Result expected2 = createResult("test2", "value2"); + Result expected3 = createResult("test3", "value3"); + Result expected4 = createResult("test4", "value4"); + Result expected5 = createResult("test5", "value5"); + Result expected6 = createResult("test6", "value6"); + Result expected7 = createResult("test7", "value7"); + + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.next()) + .thenReturn(expected1, expected2, expected3, expected4, expected5, expected6, expected7); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); + + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + when(secondaryScannerMock.next()) + .thenReturn(expected1, expected3, expected5, expected7, null, null, null); + when(secondaryTable.getScanner((Scan) any())).thenReturn(secondaryScannerMock); + + Scan scan = new Scan(); + ResultScanner mirroringScanner = spy(mirroringTable.getScanner(scan)); + assertThat(mirroringScanner.next()).isEqualTo(expected1); + assertThat(mirroringScanner.next()).isEqualTo(expected2); + assertThat(mirroringScanner.next()).isEqualTo(expected3); + assertThat(mirroringScanner.next()).isEqualTo(expected4); + 
assertThat(mirroringScanner.next()).isEqualTo(expected5); + assertThat(mirroringScanner.next()).isEqualTo(expected6); + assertThat(mirroringScanner.next()).isEqualTo(expected7); + waitForMirroringScanner(mirroringScanner); + + verify(verifier, times(7)).verify(any(Result[].class), any(Result[].class)); + verify(mirroringMetricsRecorder, times(1)).recordReadMatches(any(HBaseOperation.class), eq(1)); + verify(mirroringMetricsRecorder, times(9)) + .recordReadMismatches(any(HBaseOperation.class), eq(1)); + } + + @Test + public void testScannerUnmatchedBufferSpaceRunsOut() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + ScannerResultVerifier verifier = + spy(mismatchDetector.createScannerResultVerifier(new Scan(), 2)); + when(mismatchDetector.createScannerResultVerifier(any(Scan.class), anyInt())) + .thenReturn(verifier); + + Result expected1 = createResult("test1", "value1"); + Result expected2 = createResult("test2", "value2"); + Result expected3 = createResult("test3", "value3"); + Result expected4 = createResult("test4", "value4"); + Result expected5 = createResult("test5", "value5"); + Result expected6 = createResult("test6", "value6"); + Result expected7 = createResult("test7", "value7"); + + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.next()).thenReturn(expected1, expected2, expected3, expected4); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); + + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + when(secondaryScannerMock.next()).thenReturn(expected4, expected5, expected6, expected7); + when(secondaryTable.getScanner((Scan) any())).thenReturn(secondaryScannerMock); + + Scan scan = new Scan(); + ResultScanner mirroringScanner = spy(mirroringTable.getScanner(scan)); + assertThat(mirroringScanner.next()).isEqualTo(expected1); + assertThat(mirroringScanner.next()).isEqualTo(expected2); + 
assertThat(mirroringScanner.next()).isEqualTo(expected3); + assertThat(mirroringScanner.next()).isEqualTo(expected4); + waitForMirroringScanner(mirroringScanner); + + verify(verifier, times(4)).verify(any(Result[].class), any(Result[].class)); + verify(mirroringMetricsRecorder, never()) + .recordReadMatches(any(HBaseOperation.class), anyInt()); + verify(mirroringMetricsRecorder, times(8)) + .recordReadMismatches(any(HBaseOperation.class), eq(1)); + } + + @Test + public void testScannerUnmatchedBufferSpaceRunsOutThenReturnsMatches() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + ScannerResultVerifier verifier = + spy(mismatchDetector.createScannerResultVerifier(new Scan(), 2)); + when(mismatchDetector.createScannerResultVerifier(any(Scan.class), anyInt())) + .thenReturn(verifier); + + Result expected1 = createResult("test1", "value1"); + Result expected2 = createResult("test2", "value2"); + Result expected3 = createResult("test3", "value3"); + Result expected4 = createResult("test4", "value4"); + Result expected5 = createResult("test5", "value5"); + Result expected6 = createResult("test6", "value6"); + Result expected7 = createResult("test7", "value7"); + + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.next()) + .thenReturn(expected1, expected2, expected3, expected4, expected1, expected2, expected3); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); + + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + when(secondaryScannerMock.next()) + .thenReturn(expected4, expected5, expected6, expected7, expected1, expected2, expected3); + when(secondaryTable.getScanner((Scan) any())).thenReturn(secondaryScannerMock); + + Scan scan = new Scan(); + ResultScanner mirroringScanner = spy(mirroringTable.getScanner(scan)); + assertThat(mirroringScanner.next()).isEqualTo(expected1); + assertThat(mirroringScanner.next()).isEqualTo(expected2); + 
assertThat(mirroringScanner.next()).isEqualTo(expected3); + assertThat(mirroringScanner.next()).isEqualTo(expected4); + assertThat(mirroringScanner.next()).isEqualTo(expected1); + assertThat(mirroringScanner.next()).isEqualTo(expected2); + assertThat(mirroringScanner.next()).isEqualTo(expected3); + waitForMirroringScanner(mirroringScanner); + + verify(verifier, times(7)).verify(any(Result[].class), any(Result[].class)); + verify(mirroringMetricsRecorder, times(3)) + .recordReadMatches(any(HBaseOperation.class), anyInt()); + verify(mirroringMetricsRecorder, times(8)) + .recordReadMismatches(any(HBaseOperation.class), eq(1)); + } + + @Test + public void testScannerSkippedSomeResults() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + ScannerResultVerifier verifier = + spy(mismatchDetector.createScannerResultVerifier(new Scan(), 100)); + when(mismatchDetector.createScannerResultVerifier(any(Scan.class), anyInt())) + .thenReturn(verifier); + + Result expected1 = createResult("test1", "value1"); + Result expected2 = createResult("test2", "value2"); + Result expected3 = createResult("test3", "value3"); + Result expected4 = createResult("test4", "value4"); + Result expected5 = createResult("test5", "value5"); + Result expected6 = createResult("test6", "value6"); + Result expected7 = createResult("test7", "value7"); + + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.next()) + .thenReturn(expected1, expected2, expected3, expected5, expected6, expected7); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); + + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + when(secondaryScannerMock.next()) + .thenReturn(expected1, expected3, expected4, expected5, expected6, null); + when(secondaryTable.getScanner((Scan) any())).thenReturn(secondaryScannerMock); + + Scan scan = new Scan(); + ResultScanner mirroringScanner = spy(mirroringTable.getScanner(scan)); + 
assertThat(mirroringScanner.next()).isEqualTo(expected1); + assertThat(mirroringScanner.next()).isEqualTo(expected2); + assertThat(mirroringScanner.next()).isEqualTo(expected3); + assertThat(mirroringScanner.next()).isEqualTo(expected5); + assertThat(mirroringScanner.next()).isEqualTo(expected6); + assertThat(mirroringScanner.next()).isEqualTo(expected7); + waitForMirroringScanner(mirroringScanner); + + verify(verifier, times(6)).verify(any(Result[].class), any(Result[].class)); + verify(mirroringMetricsRecorder, times(4)) + .recordReadMatches(any(HBaseOperation.class), anyInt()); + verify(mirroringMetricsRecorder, times(3)) + .recordReadMismatches(any(HBaseOperation.class), eq(1)); + } + + @Test + public void testScannerValueMismatchIsDetected() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + ScannerResultVerifier verifier = + spy(mismatchDetector.createScannerResultVerifier(new Scan(), 100)); + when(mismatchDetector.createScannerResultVerifier(any(Scan.class), anyInt())) + .thenReturn(verifier); + + Result expected1 = createResult("test1", "value1"); + Result expected2 = createResult("test2", "value2"); + Result expected31 = createResult("test3", "value31"); + Result expected32 = createResult("test3", "value32"); + Result expected4 = createResult("test4", "value4"); + + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.next()).thenReturn(expected1, expected2, expected31, expected4); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); + + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + when(secondaryScannerMock.next()).thenReturn(expected1, expected2, expected32, expected4); + when(secondaryTable.getScanner((Scan) any())).thenReturn(secondaryScannerMock); + + Scan scan = new Scan(); + ResultScanner mirroringScanner = spy(mirroringTable.getScanner(scan)); + assertThat(mirroringScanner.next()).isEqualTo(expected1); + 
assertThat(mirroringScanner.next()).isEqualTo(expected2); + assertThat(mirroringScanner.next()).isEqualTo(expected31); + assertThat(mirroringScanner.next()).isEqualTo(expected4); + waitForMirroringScanner(mirroringScanner); + + verify(verifier, times(4)).verify(any(Result[].class), any(Result[].class)); + verify(mirroringMetricsRecorder, times(3)).recordReadMatches(HBaseOperation.NEXT, 1); + verify(mirroringMetricsRecorder, times(1)).recordReadMismatches(HBaseOperation.NEXT, 1); + } + + @Test + public void testScannerRowsResynchronization() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + ScannerResultVerifier verifier = + spy(mismatchDetector.createScannerResultVerifier(new Scan(), 100)); + when(mismatchDetector.createScannerResultVerifier(any(Scan.class), anyInt())) + .thenReturn(verifier); + + Result expected11 = createResult("test11", "value1"); + Result expected12 = createResult("test12", "value1"); + Result expected21 = createResult("test21", "value2"); + Result expected22 = createResult("test22", "value2"); + Result expected31 = createResult("test31", "value3"); + Result expected32 = createResult("test32", "value3"); + Result expected4 = createResult("test4", "value4"); + Result expected51 = createResult("test51", "value5"); + Result expected52 = createResult("test52", "value5"); + + ResultScanner primaryScannerMock = mock(ResultScanner.class); + when(primaryScannerMock.next()) + .thenReturn(expected11, expected21, expected31, expected4, expected51); + when(primaryTable.getScanner((Scan) any())).thenReturn(primaryScannerMock); + + ResultScanner secondaryScannerMock = mock(ResultScanner.class); + when(secondaryScannerMock.next()) + .thenReturn(expected12, expected22, expected32, expected4, expected52); + when(secondaryTable.getScanner((Scan) any())).thenReturn(secondaryScannerMock); + + Scan scan = new Scan(); + ResultScanner mirroringScanner = spy(mirroringTable.getScanner(scan)); + 
assertThat(mirroringScanner.next()).isEqualTo(expected11); + assertThat(mirroringScanner.next()).isEqualTo(expected21); + assertThat(mirroringScanner.next()).isEqualTo(expected31); + assertThat(mirroringScanner.next()).isEqualTo(expected4); + assertThat(mirroringScanner.next()).isEqualTo(expected51); + executorServiceRule.waitForExecutor(); + + verify(verifier, times(5)).verify(any(Result[].class), any(Result[].class)); + verify(mirroringMetricsRecorder, times(1)).recordReadMatches(HBaseOperation.NEXT, 1); + // 51 and 52 were not yet compared + verify(mirroringMetricsRecorder, times(6)).recordReadMismatches(HBaseOperation.NEXT, 1); + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTableInputModification.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTableInputModification.java index e2d527869c..8243dae352 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTableInputModification.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTableInputModification.java @@ -24,6 +24,7 @@ import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.eq; import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.mock; import static org.mockito.Mockito.spy; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; @@ -32,6 +33,9 @@ import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; import com.google.common.util.concurrent.SettableFuture; import java.io.IOException; @@ -71,9 +75,10 @@ public class TestMirroringTableInputModification { @Mock MismatchDetector mismatchDetector; @Mock FlowController flowController; @Mock SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; + Timestamper timestamper = new NoopTimestamper(); MirroringTable mirroringTable; - SettableFuture<Void> secondaryOperationAllowedFuture; + SettableFuture<Void> secondaryOperationBlockingFuture; @Before public void setUp() throws IOException, InterruptedException { @@ -88,25 +93,35 @@ public void setUp() throws IOException, InterruptedException { flowController, secondaryWriteErrorConsumer, new ReadSampler(100), + timestamper, false, - new MirroringTracer())); + false, + new MirroringTracer(), + mock(ReferenceCounter.class), + 10)); + + this.secondaryOperationBlockingFuture = SettableFuture.create(); mockExistsAll(this.primaryTable); mockGet(this.primaryTable); mockBatch(this.primaryTable); - secondaryOperationAllowedFuture = SettableFuture.create(); + secondaryOperationBlockingFuture = SettableFuture.create(); - blockMethodCall(secondaryTable, secondaryOperationAllowedFuture) + blockMethodCall(secondaryTable, secondaryOperationBlockingFuture) .existsAll(ArgumentMatchers.anyList()); - blockMethodCall(this.secondaryTable, secondaryOperationAllowedFuture) + blockMethodCall(this.secondaryTable, secondaryOperationBlockingFuture) .batch(ArgumentMatchers.anyList(), (Object[]) any()); - blockMethodCall(this.secondaryTable, secondaryOperationAllowedFuture)
+ blockMethodCall(this.secondaryTable, secondaryOperationBlockingFuture) .get(ArgumentMatchers.anyList()); } @Test public void testExistsAll() throws IOException { + mockExistsAll(this.primaryTable); + blockMethodCall(secondaryTable, secondaryOperationBlockingFuture) + .existsAll(ArgumentMatchers.anyList()); + List<Get> gets = createGets("k1", "k2", "k3"); List<Get> inputList = new ArrayList<>(gets); @@ -114,7 +129,7 @@ public void testExistsAll() throws IOException { verify(this.primaryTable, times(1)).existsAll(inputList); inputList.clear(); // User modifies the list - secondaryOperationAllowedFuture.set(null); + secondaryOperationBlockingFuture.set(null); executorServiceRule.waitForExecutor(); verify(this.secondaryTable, times(1)).existsAll(gets); @@ -122,6 +137,10 @@ @Test public void testGet() throws IOException { + mockGet(this.primaryTable); + blockMethodCall(this.secondaryTable, secondaryOperationBlockingFuture) + .get(ArgumentMatchers.anyList()); + List<Get> gets = createGets("k1", "k2", "k3"); List<Get> inputList = new ArrayList<>(gets); @@ -129,7 +148,7 @@ public void testGet() throws IOException { verify(this.primaryTable, times(1)).get(inputList); inputList.clear(); // User modifies the list - secondaryOperationAllowedFuture.set(null); + secondaryOperationBlockingFuture.set(null); executorServiceRule.waitForExecutor(); verify(this.secondaryTable, times(1)).get(gets); @@ -137,6 +156,10 @@ @Test public void testPut() throws IOException, InterruptedException { + mockBatch(this.primaryTable); + blockMethodCall(this.secondaryTable, secondaryOperationBlockingFuture) + .batch(ArgumentMatchers.anyList(), (Object[]) any()); + List<Put> puts = Collections.singletonList(createPut("r", "f", "q", "v")); List<Put> inputList = new ArrayList<>(puts); @@ -144,7 +167,7 @@ public void testPut() throws IOException, InterruptedException { verify(this.primaryTable, times(1)).batch(eq(inputList), (Object[])
any()); inputList.clear(); // User modifies the list - secondaryOperationAllowedFuture.set(null); + secondaryOperationBlockingFuture.set(null); executorServiceRule.waitForExecutor(); verify(this.secondaryTable, times(1)).batch(eq(puts), (Object[]) any()); @@ -152,13 +175,17 @@ @Test public void testDelete() throws IOException, InterruptedException { + mockBatch(this.primaryTable); + blockMethodCall(this.secondaryTable, secondaryOperationBlockingFuture) + .batch(ArgumentMatchers.anyList(), (Object[]) any()); + List<Delete> puts = Collections.singletonList(new Delete("r".getBytes())); List<Delete> inputList = new ArrayList<>(puts); this.mirroringTable.delete(inputList); // inputList is modified by the call verify(this.primaryTable, times(1)).batch(eq(puts), (Object[]) any()); - secondaryOperationAllowedFuture.set(null); + secondaryOperationBlockingFuture.set(null); executorServiceRule.waitForExecutor(); verify(this.secondaryTable, times(1)).batch(eq(puts), (Object[]) any()); @@ -166,6 +193,10 @@ @Test public void testBatch() throws IOException, InterruptedException { + mockBatch(this.primaryTable); + blockMethodCall(this.secondaryTable, secondaryOperationBlockingFuture) + .batch(ArgumentMatchers.anyList(), (Object[]) any()); + List<Row> ops = Arrays.asList(new Delete("r".getBytes()), createGet("k")); List<Row> inputList = new ArrayList<>(ops); @@ -173,7 +204,7 @@ public void testBatch() throws IOException, InterruptedException { verify(this.primaryTable, times(1)).batch(eq(ops), (Object[]) any()); inputList.clear(); // User modifies the list - secondaryOperationAllowedFuture.set(null); + secondaryOperationBlockingFuture.set(null); executorServiceRule.waitForExecutor(); verify(this.secondaryTable, times(1)).batch(eq(ops), (Object[]) any()); diff --git
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTableSynchronousMode.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTableSynchronousMode.java new file mode 100644 index 0000000000..7da31203fd --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestMirroringTableSynchronousMode.java @@ -0,0 +1,344 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createPut; +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createResult; +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.mockBatch; +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerMock; +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerToRejectRequests; +import static com.google.common.truth.Truth.assertThat; +import static org.junit.Assert.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; + +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException.DatabaseIdentifier; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import java.io.IOException; +import java.util.Arrays; +import java.util.List; +import org.apache.hadoop.hbase.client.Increment; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import 
org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.Table; +import org.junit.Before; +import org.junit.Rule; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.mockito.ArgumentMatchers; +import org.mockito.Mock; +import org.mockito.junit.MockitoJUnit; +import org.mockito.junit.MockitoRule; + +@RunWith(JUnit4.class) +public class TestMirroringTableSynchronousMode { + @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule(); + + @Rule + public final ExecutorServiceRule executorServiceRule = + ExecutorServiceRule.singleThreadedExecutor(); + + @Mock Table primaryTable; + @Mock Table secondaryTable; + @Mock MismatchDetector mismatchDetector; + @Mock FlowController flowController; + @Mock SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; + Timestamper timestamper = new NoopTimestamper(); + + MirroringTable mirroringTable; + + @Before + public void setUp() { + setupFlowControllerMock(flowController); + setupTable(false); + } + + private void setupTable(boolean concurrent) { + this.mirroringTable = + spy( + new MirroringTable( + primaryTable, + secondaryTable, + executorServiceRule.executorService, + mismatchDetector, + flowController, + secondaryWriteErrorConsumer, + new ReadSampler(100), + this.timestamper, + concurrent, + true, + new MirroringTracer(), + mock(ReferenceCounter.class), + 5)); + } + + @Test + public void testConcurrentFlowControlRejection() throws IOException, InterruptedException { + setupTable(true); + + IOException flowControllerException = setupFlowControllerToRejectRequests(flowController); + + Put put = createPut("test1", "f1", "q1", "v1"); + try { + mirroringTable.put(put); + fail("should throw"); + } catch (IOException e) { + // FlowController exception is wrapped in IOException by mirroringTable. 
+ assertThat(e).hasCauseThat().isEqualTo(flowControllerException); + MirroringOperationException mirroringOperationException = + MirroringOperationException.extractRootCause(e); + assertThat(mirroringOperationException).isNotNull(); + assertThat(mirroringOperationException.databaseIdentifier).isEqualTo(DatabaseIdentifier.Both); + assertThat(mirroringOperationException.operation).isNull(); + assertThat(mirroringOperationException.secondaryException).isNull(); + } + + verify(primaryTable, never()).batch(ArgumentMatchers.anyList(), any(Object[].class)); + verify(secondaryTable, never()).batch(ArgumentMatchers.anyList(), any(Object[].class)); + } + + @Test + public void testConcurrentWithoutErrors() throws IOException, InterruptedException { + setupTable(true); + + Put put = createPut("test1", "f1", "q1", "v1"); + + mockBatch(primaryTable, secondaryTable); + + mirroringTable.put(put); + + verify(primaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + verify(secondaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + } + + @Test + public void testConcurrentWithErrors() throws IOException, InterruptedException { + setupTable(true); + + Put put1 = createPut("test1", "f1", "q1", "v1"); + Put put2 = createPut("test2", "f1", "q1", "v2"); + Put put3 = createPut("test3", "f1", "q1", "v3"); + Put put4 = createPut("test4", "f1", "q1", "v4"); + + List operations = Arrays.asList(put1, put2, put3, put4); + + IOException put1error = new IOException("put1"); + IOException put2error = new IOException("put2"); + IOException put3error1 = new IOException("put3_1"); + IOException put3error2 = new IOException("put3_2"); + + // | p1 | p2 | p3 | p4 + // 1 | x | v | x | v + // 2 | v | x | x | v + + mockBatch(primaryTable, put1, put1error, put3, put3error1); + mockBatch(secondaryTable, put2, put2error, put3, put3error2); + + Object[] results = new Object[4]; + + RetriesExhaustedWithDetailsException exception = null; + try { + 
mirroringTable.batch(operations, results); + fail("should throw"); + } catch (RetriesExhaustedWithDetailsException e) { + exception = e; + } + + assertThat(results[0]).isInstanceOf(Throwable.class); + assertThat(results[1]).isInstanceOf(Throwable.class); + assertThat(results[2]).isInstanceOf(Throwable.class); + assertThat(results[3]).isNotInstanceOf(Throwable.class); + + Throwable t1 = (Throwable) results[0]; + Throwable t2 = (Throwable) results[1]; + Throwable t3 = (Throwable) results[2]; + + assertThat(t1).isEqualTo(put1error); + assertThat(t2).isEqualTo(put2error); + assertThat(t3).isEqualTo(put3error1); + + assertThat(MirroringOperationException.extractRootCause(t1).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Primary); + assertThat(MirroringOperationException.extractRootCause(t2).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Secondary); + assertThat(MirroringOperationException.extractRootCause(t3).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Both); + assertThat(MirroringOperationException.extractRootCause(t3).secondaryException.exception) + .isEqualTo(put3error2); + + assertThat(exception.getNumExceptions()).isEqualTo(3); + + assertThat(exception.getRow(0)).isEqualTo(put1); + assertThat(exception.getCause(0)).isEqualTo(put1error); + assertThat(exception.getRow(1)).isEqualTo(put2); + assertThat(exception.getCause(1)).isEqualTo(put2error); + assertThat(exception.getRow(2)).isEqualTo(put3); + assertThat(exception.getCause(2)).isEqualTo(put3error1); + assertThat( + MirroringOperationException.extractRootCause(exception.getCause(2)) + .secondaryException + .exception) + .isEqualTo(put3error2); + + verify(primaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + verify(secondaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + } + + @Test + public void testSequentialFlowControlRejection() throws IOException, InterruptedException { + setupTable(false); + + IOException flowControllerException = 
setupFlowControllerToRejectRequests(flowController); + + mockBatch(primaryTable); + + Put put = createPut("test1", "f1", "q1", "v1"); + Object[] results = new Object[1]; + try { + mirroringTable.batch(Arrays.asList(put), results); + fail("should throw"); + } catch (RetriesExhaustedWithDetailsException e) { + // FlowController exception is wrapped in IOException by mirroringTable and in + // ExecutionException by a future. + assertThat(e.getNumExceptions()).isEqualTo(1); + assertThat(e.getCause(0)).isEqualTo(flowControllerException); + MirroringOperationException mirroringException = + MirroringOperationException.extractRootCause(e.getCause(0)); + assertThat(mirroringException.operation).isEqualTo(put); + assertThat(mirroringException.databaseIdentifier).isEqualTo(DatabaseIdentifier.Secondary); + assertThat(mirroringException.secondaryException).isNull(); + } + + verify(primaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + verify(secondaryTable, never()).batch(ArgumentMatchers.anyList(), any(Object[].class)); + } + + @Test + public void testSequentialWithoutErrors() throws IOException, InterruptedException { + setupTable(false); + + Put put = createPut("test1", "f1", "q1", "v1"); + + mockBatch(primaryTable, secondaryTable); + + mirroringTable.put(put); + + verify(primaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + verify(secondaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + } + + @Test + public void testSequentialWithErrors() throws IOException, InterruptedException { + setupTable(false); + + Put put1 = createPut("test1", "f1", "q1", "v1"); + Increment incr2 = new Increment("test2".getBytes()); + incr2.addColumn("f1".getBytes(), "q1".getBytes(), 1); + Put put3 = createPut("test3", "f1", "q1", "v3"); + Put put4 = createPut("test4", "f1", "q1", "v4"); + + List operations = Arrays.asList(put1, incr2, put3, put4); + + IOException put1error = new IOException("put1"); + IOException 
put2error = new IOException("incr2"); + IOException put3error1 = new IOException("put3_1"); + IOException put3error2 = new IOException("put3_2"); + + // | p1 | p2 | p3 | p4 + // 1 | x | v | x | v + // 2 | v | x | x | v + + mockBatch( + primaryTable, + put1, + put1error, + incr2, + createResult("test2", "f1", "q1", 123, "v11"), + put3, + put3error1); + mockBatch( + secondaryTable, + Put.class, + put2error, + put3, + put3error2, + put1, + new Result(), + put4, + new Result()); + + Object[] results = new Object[4]; + + RetriesExhaustedWithDetailsException exception = null; + try { + mirroringTable.batch(operations, results); + fail("should throw"); + } catch (RetriesExhaustedWithDetailsException e) { + exception = e; + } + + assertThat(results[0]).isInstanceOf(Throwable.class); + assertThat(results[1]).isInstanceOf(Throwable.class); + assertThat(results[2]).isInstanceOf(Throwable.class); + assertThat(results[3]).isNotInstanceOf(Throwable.class); + + Throwable t1 = (Throwable) results[0]; + Throwable t2 = (Throwable) results[1]; + Throwable t3 = (Throwable) results[2]; + + assertThat(t1).isEqualTo(put1error); + assertThat(t2).isEqualTo(put2error); + assertThat(t3).isEqualTo(put3error1); + + assertThat(MirroringOperationException.extractRootCause(t1).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Primary); + assertThat(MirroringOperationException.extractRootCause(t2).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Secondary); + assertThat(MirroringOperationException.extractRootCause(t2).operation).isInstanceOf(Put.class); + assertThat(MirroringOperationException.extractRootCause(t3).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Primary); // Sequential - not run on secondary + assertThat(MirroringOperationException.extractRootCause(t3).secondaryException).isNull(); + + assertThat(exception.getNumExceptions()).isEqualTo(3); + + assertThat(exception.getRow(0)).isEqualTo(put1); + assertThat(exception.getCause(0)).isEqualTo(put1error); + 
assertThat(exception.getRow(1)).isEqualTo(incr2); + assertThat(exception.getCause(1)).isEqualTo(put2error); + assertThat(exception.getRow(2)).isEqualTo(put3); + assertThat(exception.getCause(2)).isEqualTo(put3error1); + assertThat( + MirroringOperationException.extractRootCause(exception.getCause(2)).secondaryException) + .isNull(); + + verify(primaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + verify(secondaryTable, times(1)).batch(ArgumentMatchers.anyList(), any(Object[].class)); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestVerificationSampling.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestVerificationSampling.java index 9dfe8b09d4..25c10af1e4 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestVerificationSampling.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/TestVerificationSampling.java @@ -33,6 +33,9 @@ import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; import com.google.common.collect.ImmutableList; import java.io.IOException; @@ -60,6 +63,7 @@ 
@RunWith(JUnit4.class) public class TestVerificationSampling { @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule(); + Timestamper timestamper = new NoopTimestamper(); @Rule public final ExecutorServiceRule executorServiceRule = @@ -90,8 +94,12 @@ public void setUp() { flowController, secondaryWriteErrorConsumer, readSampler, + timestamper, false, - new MirroringTracer())); + false, + new MirroringTracer(), + mock(ReferenceCounter.class), + 10)); } @Test @@ -165,7 +173,7 @@ public void isExistsAllSampled() throws IOException { } @Test - public void isBatchSampled() throws IOException, InterruptedException { + public void isBatchSampledWithSamplingEnabled() throws IOException, InterruptedException { Put put = createPut("test", "test", "test", "test"); List ops = ImmutableList.of(get, put); @@ -183,17 +191,38 @@ public Void answer(InvocationOnMock invocationOnMock) throws Throwable { .when(primaryTable) .batch(eq(ops), any(Object[].class)); - withSamplingEnabled(false); + withSamplingEnabled(true); mirroringTable.batch(ops); + executorServiceRule.waitForExecutor(); verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); verify(primaryTable, times(1)).batch(eq(ops), any(Object[].class)); + verify(secondaryTable, times(1)).batch(eq(ops), any(Object[].class)); + } - withSamplingEnabled(true); + @Test + public void isBatchSampledWithSamplingDisabled() throws IOException, InterruptedException { + Put put = createPut("test", "test", "test", "test"); + List ops = ImmutableList.of(get, put); + + doAnswer( + new Answer() { + @Override + public Void answer(InvocationOnMock invocationOnMock) throws Throwable { + Object[] args = invocationOnMock.getArguments(); + Object[] result = (Object[]) args[1]; + result[0] = Result.create(new Cell[0]); + result[1] = Result.create(new Cell[0]); + return null; + } + }) + .when(primaryTable) + .batch(eq(ops), any(Object[].class)); + + withSamplingEnabled(false); mirroringTable.batch(ops); 
executorServiceRule.waitForExecutor(); - verify(readSampler, times(2)).shouldNextReadOperationBeSampled(); - verify(primaryTable, times(2)).batch(eq(ops), any(Object[].class)); - verify(secondaryTable, times(1)).batch(eq(ops), any(Object[].class)); + verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(1)).batch(eq(ops), any(Object[].class)); verify(secondaryTable, times(1)).batch(eq(ImmutableList.of(put)), any(Object[].class)); } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestAsyncResultScannerWrapper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestAsyncResultScannerWrapper.java index 6704916dca..2627ee7af4 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestAsyncResultScannerWrapper.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestAsyncResultScannerWrapper.java @@ -15,19 +15,13 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers; -import static com.google.common.truth.Truth.assertThat; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; import com.google.common.util.concurrent.MoreExecutors; -import com.google.common.util.concurrent.SettableFuture; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; import org.apache.hadoop.hbase.client.ResultScanner; -import org.apache.hadoop.hbase.client.Table; 
import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; @@ -35,41 +29,15 @@ @RunWith(JUnit4.class) public class TestAsyncResultScannerWrapper { @Test - public void testListenersAreCalledOnClose() - throws InterruptedException, ExecutionException, TimeoutException { - Table table = mock(Table.class); + public void testAsyncResultScannerWrapperClosedTwiceClosesScannerOnce() { ResultScanner resultScanner = mock(ResultScanner.class); AsyncResultScannerWrapper asyncResultScannerWrapper = new AsyncResultScannerWrapper( - table, resultScanner, MoreExecutors.listeningDecorator(MoreExecutors.newDirectExecutorService()), new MirroringTracer()); - final SettableFuture listenerFuture = SettableFuture.create(); - asyncResultScannerWrapper.addOnCloseListener( - new Runnable() { - @Override - public void run() { - listenerFuture.set(null); - } - }); - asyncResultScannerWrapper.asyncClose().get(3, TimeUnit.SECONDS); - assertThat(listenerFuture.get(3, TimeUnit.SECONDS)).isNull(); - } - - @Test - public void testAsyncResultScannerWrapperClosedTwiceClosesScannerOnce() - throws InterruptedException, ExecutionException, TimeoutException { - Table table = mock(Table.class); - ResultScanner resultScanner = mock(ResultScanner.class); - AsyncResultScannerWrapper asyncResultScannerWrapper = - new AsyncResultScannerWrapper( - table, - resultScanner, - MoreExecutors.listeningDecorator(MoreExecutors.newDirectExecutorService()), - new MirroringTracer()); - asyncResultScannerWrapper.asyncClose().get(3, TimeUnit.SECONDS); - asyncResultScannerWrapper.asyncClose().get(3, TimeUnit.SECONDS); + asyncResultScannerWrapper.close(); + asyncResultScannerWrapper.close(); verify(resultScanner, times(1)).close(); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestAsyncTableWrapper.java 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestAsyncTableWrapper.java index a87e12e175..7abddeb5e7 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestAsyncTableWrapper.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/asyncwrappers/TestAsyncTableWrapper.java @@ -22,9 +22,6 @@ import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; import com.google.common.util.concurrent.ListeningExecutorService; import java.io.IOException; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; import org.apache.hadoop.hbase.client.Table; import org.junit.Test; import org.junit.runner.RunWith; @@ -34,13 +31,12 @@ public class TestAsyncTableWrapper { @Test - public void testMultipleCloseCallsCloseOnTableOnlyOnce() - throws InterruptedException, ExecutionException, TimeoutException, IOException { + public void testMultipleCloseCallsCloseOnTableOnlyOnce() throws IOException { Table table = mock(Table.class); AsyncTableWrapper asyncTableWrapper = new AsyncTableWrapper(table, mock(ListeningExecutorService.class), new MirroringTracer()); - asyncTableWrapper.asyncClose().get(3, TimeUnit.SECONDS); - asyncTableWrapper.asyncClose().get(3, TimeUnit.SECONDS); + asyncTableWrapper.close(); + asyncTableWrapper.close(); verify(table, times(1)).close(); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/MirroringBufferedMutatorCommon.java 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/MirroringBufferedMutatorCommon.java new file mode 100644 index 0000000000..28eccb1d17 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/MirroringBufferedMutatorCommon.java @@ -0,0 +1,180 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerMock; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_PRIMARY_CONFIG_PREFIX_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SECONDARY_CONFIG_PREFIX_KEY; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; + +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConfiguration; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; +import com.google.cloud.bigtable.mirroring.hbase1_x.TestConnection; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; +import com.google.common.util.concurrent.SettableFuture; +import java.io.IOException; +import java.util.Arrays; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.BufferedMutator; +import org.apache.hadoop.hbase.client.BufferedMutatorParams; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Mutation; 
+import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import org.apache.hadoop.hbase.client.Row; +import org.mockito.ArgumentCaptor; +import org.mockito.invocation.InvocationOnMock; +import org.mockito.stubbing.Answer; + +public class MirroringBufferedMutatorCommon { + public final Connection primaryConnection = mock(Connection.class); + public final Connection secondaryConnection = mock(Connection.class); + public final FlowController flowController = mock(FlowController.class); + public final SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumerWithMetrics = + mock(SecondaryWriteErrorConsumerWithMetrics.class); + + public final BufferedMutatorParams bufferedMutatorParams = + new BufferedMutatorParams(TableName.valueOf("test1")); + + public final BufferedMutator primaryBufferedMutator = mock(BufferedMutator.class); + public final BufferedMutator secondaryBufferedMutator = mock(BufferedMutator.class); + + public final ArgumentCaptor primaryBufferedMutatorParamsCaptor; + public final ArgumentCaptor secondaryBufferedMutatorParamsCaptor; + + public final ResourceReservation resourceReservation; + + public final Mutation mutation1 = new Delete("key1".getBytes()); + public final Mutation mutation2 = new Delete("key2".getBytes()); + public final Mutation mutation3 = new Delete("key3".getBytes()); + public final Mutation mutation4 = new Delete("key4".getBytes()); + public final long mutationSize = mutation1.heapSize(); + + public MirroringBufferedMutatorCommon() { + this.primaryBufferedMutatorParamsCaptor = ArgumentCaptor.forClass(BufferedMutatorParams.class); + try { + doReturn(primaryBufferedMutator) + .when(primaryConnection) + .getBufferedMutator(primaryBufferedMutatorParamsCaptor.capture()); + } catch (IOException e) { + throw new RuntimeException(e); + } + + this.secondaryBufferedMutatorParamsCaptor = + ArgumentCaptor.forClass(BufferedMutatorParams.class); + try { + doReturn(secondaryBufferedMutator) + .when(secondaryConnection) 
+ .getBufferedMutator(secondaryBufferedMutatorParamsCaptor.capture()); + } catch (IOException e) { + throw new RuntimeException(e); + } + + resourceReservation = setupFlowControllerMock(flowController); + } + + public static Answer mutateWithErrors( + final ArgumentCaptor argumentCaptor, + final BufferedMutator bufferedMutator, + final Mutation... failingMutations) { + return new Answer() { + @Override + public Void answer(InvocationOnMock invocationOnMock) throws Throwable { + List failingMutationsList = Arrays.asList(failingMutations); + List argument = invocationOnMock.getArgument(0); + for (Mutation m : argument) { + if (failingMutationsList.contains(m)) { + callErrorHandler(m, argumentCaptor, bufferedMutator); + } + } + return null; + } + }; + } + + public static Answer flushWithErrors( + final ArgumentCaptor argumentCaptor, + final BufferedMutator bufferedMutator, + final Mutation... failingMutations) { + return new Answer() { + @Override + public Void answer(InvocationOnMock invocationOnMock) throws Throwable { + for (Mutation m : failingMutations) { + callErrorHandler(m, argumentCaptor, bufferedMutator); + } + return null; + } + }; + } + + private static void callErrorHandler( + Mutation m, + ArgumentCaptor argumentCaptor, + BufferedMutator bufferedMutator) + throws RetriesExhaustedWithDetailsException { + argumentCaptor + .getValue() + .getListener() + .onException( + new RetriesExhaustedWithDetailsException( + Arrays.asList(new Throwable[] {new IOException()}), + Arrays.asList(new Row[] {m}), + Arrays.asList("invalid.example:1234")), + bufferedMutator); + } + + public static MirroringConfiguration makeConfigurationWithFlushThreshold(long flushThreshold) { + Configuration mirroringConfig = new Configuration(); + mirroringConfig.set( + "hbase.client.connection.impl", MirroringConnection.class.getCanonicalName()); + + mirroringConfig.set( + MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, TestConnection.class.getCanonicalName()); + 
mirroringConfig.set(MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, "prefix1"); + mirroringConfig.set( + MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, TestConnection.class.getCanonicalName()); + mirroringConfig.set(MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, "prefix2"); + mirroringConfig.set(MIRRORING_BUFFERED_MUTATOR_BYTES_TO_FLUSH, String.valueOf(flushThreshold)); + + return new MirroringConfiguration(mirroringConfig); + } + + public static Answer blockedFlushes( + final AtomicInteger ongoingFlushes, + final SettableFuture allFlushesStarted, + final SettableFuture endFlush, + final int expectedNumberOfFlushes) { + return new Answer() { + @Override + public Void answer(InvocationOnMock invocationOnMock) throws Throwable { + if (ongoingFlushes.incrementAndGet() == expectedNumberOfFlushes) { + allFlushesStarted.set(null); + } + endFlush.get(); + return null; + } + }; + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestConcurrentMirroringBufferedMutator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestConcurrentMirroringBufferedMutator.java new file mode 100644 index 0000000000..e4eb9ef321 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestConcurrentMirroringBufferedMutator.java @@ -0,0 +1,411 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.blockMethodCall; +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.waitUntilCalled; +import static com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.MirroringBufferedMutatorCommon.flushWithErrors; +import static com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.MirroringBufferedMutatorCommon.makeConfigurationWithFlushThreshold; +import static com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.MirroringBufferedMutatorCommon.mutateWithErrors; +import static com.google.common.truth.Truth.assertThat; +import static org.junit.Assert.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.atLeast; +import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; + +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOperationException.DatabaseIdentifier; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; +import 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.common.util.concurrent.SettableFuture; +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.TimeoutException; +import org.apache.hadoop.hbase.client.BufferedMutator; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import org.apache.hadoop.hbase.client.Row; +import org.junit.Rule; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.mockito.ArgumentMatchers; +import org.mockito.InOrder; +import org.mockito.Mockito; +import org.mockito.junit.MockitoJUnit; +import org.mockito.junit.MockitoRule; + +@RunWith(JUnit4.class) +public class TestConcurrentMirroringBufferedMutator { + @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule(); + + @Rule + public final ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + + Timestamper timestamper = new NoopTimestamper(); + + public final MirroringBufferedMutatorCommon common = new MirroringBufferedMutatorCommon(); + + private final List singletonMutation1 = Collections.singletonList(common.mutation1); + + @Test + public void testBufferedWritesWithoutErrors() throws IOException, InterruptedException { + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate(common.mutation1); + verify(common.primaryBufferedMutator, times(1)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, times(1)).mutate(singletonMutation1); + bm.mutate(common.mutation1); + 
verify(common.primaryBufferedMutator, times(2)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, times(2)).mutate(singletonMutation1); + bm.mutate(common.mutation1); + verify(common.primaryBufferedMutator, times(3)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, times(3)).mutate(singletonMutation1); + bm.mutate(common.mutation1); + executorServiceRule.waitForExecutor(); + verify(common.primaryBufferedMutator, times(4)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, times(4)).mutate(singletonMutation1); + verify(common.primaryBufferedMutator, times(1)).flush(); + verify(common.secondaryBufferedMutator, times(1)).flush(); + verify(common.flowController, never()) + .asyncRequestResource(any(RequestResourcesDescription.class)); + verify(common.resourceReservation, never()).release(); + } + + @Test + public void testBufferedMutatorFlush() throws IOException { + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.flush(); + executorServiceRule.waitForExecutor(); + verify(common.primaryBufferedMutator, times(3)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, times(3)).mutate(singletonMutation1); + verify(common.primaryBufferedMutator, times(1)).flush(); + verify(common.secondaryBufferedMutator, times(1)).flush(); + verify(common.flowController, never()) + .asyncRequestResource(any(RequestResourcesDescription.class)); + verify(common.resourceReservation, never()).release(); + } + + @Test + public void testCloseFlushesWrites() throws IOException { + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.close(); + verify(common.primaryBufferedMutator, times(3)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, 
times(3)).mutate(singletonMutation1); + verify(common.primaryBufferedMutator, times(1)).flush(); + verify(common.secondaryBufferedMutator, times(1)).flush(); + verify(common.flowController, never()) + .asyncRequestResource(any(RequestResourcesDescription.class)); + verify(common.resourceReservation, never()).release(); + } + + @Test + public void testCloseIsIdempotent() throws IOException { + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.close(); + bm.close(); + verify(common.secondaryBufferedMutator, times(1)).flush(); + verify(common.flowController, never()) + .asyncRequestResource(any(RequestResourcesDescription.class)); + verify(common.resourceReservation, never()).release(); + } + + @Test + public void testBlockedPrimaryFlushesDoNotPreventSecondaryAsyncFlushes() + throws IOException, TimeoutException { + final SettableFuture startPrimaryFlush = SettableFuture.create(); + final SettableFuture startSecondaryFlush = SettableFuture.create(); + + // We block flushes on the underlying mutators - when `flush()` is called, it waits until the + // corresponding future is completed. Blocked calls are also counted as calls by the + // `verify(...).flush()` assertion. + blockMethodCall(common.primaryBufferedMutator, startPrimaryFlush).flush(); + blockMethodCall(common.secondaryBufferedMutator, startSecondaryFlush).flush(); + + // Set the flush threshold below the size of a single mutation - a flush should be scheduled + // after every `mutate` call. + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 0.5)); + + // First mutation - primary and secondary flushes should be scheduled and executed at some point + // in time. + bm.mutate(common.mutation1); + // Second mutation - primary and secondary flushes should be scheduled but won't be called while + // previous flushes are still running. 
+ bm.mutate(common.mutation2); + + // Wait for first, blocking, flush calls to be called. + waitUntilCalled(common.primaryBufferedMutator, "flush", /* calls */ 1, /* timeout */ 1); + waitUntilCalled(common.secondaryBufferedMutator, "flush", /* calls */ 1, /* timeout */ 1); + + // And unlock primary flush. The secondary remains blocked. + startPrimaryFlush.set(null); + + // Primary flush should be called for the second time now. + waitUntilCalled(common.primaryBufferedMutator, "flush", /* calls */ 2, /* timeout */ 1); + // But secondary flush should still be blocked. + try { + waitUntilCalled(common.secondaryBufferedMutator, "flush", /* calls */ 2, /* timeout */ 2); + fail("should time out"); + } catch (TimeoutException expected) { + // expected + } + + verify(common.primaryBufferedMutator, times(2)).flush(); + verify(common.secondaryBufferedMutator, times(1)).flush(); + } + + @Test + public void testBlockedSecondaryFlushesDoNotPreventPrimaryAsyncFlushes() + throws IOException, TimeoutException { + final SettableFuture startPrimaryFlush = SettableFuture.create(); + final SettableFuture startSecondaryFlush = SettableFuture.create(); + + blockMethodCall(common.primaryBufferedMutator, startPrimaryFlush).flush(); + blockMethodCall(common.secondaryBufferedMutator, startSecondaryFlush).flush(); + + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 0.5)); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation2); + + waitUntilCalled(common.primaryBufferedMutator, "flush", /* calls */ 1, /* timeout */ 1); + waitUntilCalled(common.secondaryBufferedMutator, "flush", /* calls */ 1, /* timeout */ 1); + + startSecondaryFlush.set(null); + + waitUntilCalled(common.secondaryBufferedMutator, "flush", /* calls */ 2, /* timeout */ 1); + try { + waitUntilCalled(common.primaryBufferedMutator, "flush", /* calls */ 2, /* timeout */ 2); + fail("should time out"); + } catch (TimeoutException expected) { + // expected + } + + verify(common.secondaryBufferedMutator, 
times(2)).flush(); + verify(common.primaryBufferedMutator, times(1)).flush(); + } + + @Test + public void testFlushesCanBeScheduledSimultaneouslyAndAreExecutedInOrder() throws IOException { + final SettableFuture startFlush = SettableFuture.create(); + + blockMethodCall(common.primaryBufferedMutator, startFlush).flush(); + + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 0.5)); + + InOrder inOrder1 = Mockito.inOrder(common.primaryBufferedMutator); + InOrder inOrder2 = Mockito.inOrder(common.secondaryBufferedMutator); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation2); + bm.mutate(common.mutation3); + bm.mutate(common.mutation4); + startFlush.set(null); + executorServiceRule.waitForExecutor(); + + inOrder1.verify(common.primaryBufferedMutator).mutate(Arrays.asList(common.mutation1)); + inOrder1.verify(common.primaryBufferedMutator).mutate(Arrays.asList(common.mutation2)); + inOrder1.verify(common.primaryBufferedMutator).mutate(Arrays.asList(common.mutation3)); + inOrder1.verify(common.primaryBufferedMutator).mutate(Arrays.asList(common.mutation4)); + // First flush happened somewhere between mutation1 and now, others are guaranteed to be called + // after startFlush was set. 
+ inOrder1.verify(common.primaryBufferedMutator, atLeast(3)).flush(); + + inOrder2.verify(common.secondaryBufferedMutator).mutate(Arrays.asList(common.mutation1)); + inOrder2.verify(common.secondaryBufferedMutator).mutate(Arrays.asList(common.mutation2)); + inOrder2.verify(common.secondaryBufferedMutator).mutate(Arrays.asList(common.mutation3)); + inOrder2.verify(common.secondaryBufferedMutator).mutate(Arrays.asList(common.mutation4)); + + verify(common.primaryBufferedMutator, times(4)).mutate(ArgumentMatchers.anyList()); + verify(common.secondaryBufferedMutator, times(4)).mutate(ArgumentMatchers.anyList()); + verify(common.secondaryBufferedMutator, times(4)).flush(); + verify(common.flowController, never()) + .asyncRequestResource(any(RequestResourcesDescription.class)); + verify(common.resourceReservation, never()).release(); + } + + @Test + public void testErrorsReportedByPrimaryDoNotPreventSecondaryWrites() throws IOException { + doAnswer( + mutateWithErrors( + this.common.primaryBufferedMutatorParamsCaptor, + common.primaryBufferedMutator, + common.mutation1, + common.mutation3)) + .when(common.primaryBufferedMutator) + .mutate(ArgumentMatchers.anyList()); + + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + try { + bm.mutate(common.mutation1); + } catch (IOException ignored) { + } + try { + bm.mutate(common.mutation2); + } catch (IOException ignored) { + } + try { + bm.mutate(common.mutation3); + } catch (IOException ignored) { + } + try { + bm.mutate(common.mutation4); + } catch (IOException ignored) { + } + executorServiceRule.waitForExecutor(); + verify(common.primaryBufferedMutator, times(1)).mutate(Arrays.asList(common.mutation1)); + verify(common.secondaryBufferedMutator, times(1)).mutate(Arrays.asList(common.mutation1)); + verify(common.primaryBufferedMutator, times(1)).mutate(Arrays.asList(common.mutation2)); + verify(common.secondaryBufferedMutator, times(1)).mutate(Arrays.asList(common.mutation2)); + 
verify(common.primaryBufferedMutator, times(1)).mutate(Arrays.asList(common.mutation3)); + verify(common.secondaryBufferedMutator, times(1)).mutate(Arrays.asList(common.mutation3)); + verify(common.primaryBufferedMutator, times(1)).mutate(Arrays.asList(common.mutation4)); + verify(common.secondaryBufferedMutator, times(1)).mutate(Arrays.asList(common.mutation4)); + } + + @Test + public void testErrorsReportedBySecondaryAreReportedAsWriteErrors() throws IOException { + doAnswer( + mutateWithErrors( + this.common.secondaryBufferedMutatorParamsCaptor, + common.secondaryBufferedMutator, + common.mutation1, + common.mutation3)) + .when(common.secondaryBufferedMutator) + .mutate(ArgumentMatchers.anyList()); + + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate( + Arrays.asList(common.mutation1, common.mutation2, common.mutation3, common.mutation4)); + executorServiceRule.waitForExecutor(); + verify(common.secondaryBufferedMutator, times(1)) + .mutate( + Arrays.asList(common.mutation1, common.mutation2, common.mutation3, common.mutation4)); + + verify(common.secondaryWriteErrorConsumerWithMetrics, never()) + .consume(any(HBaseOperation.class), any(Row.class), any(Throwable.class)); + verify(common.secondaryWriteErrorConsumerWithMetrics, never()) + .consume(any(HBaseOperation.class), ArgumentMatchers.anyList(), any(Throwable.class)); + } + + @Test + public void testErrorsInBothPrimaryAndSecondary() throws IOException { + // | primary | secondary | + // m1 | x | v | + // m2 | v | v | + // m3 | x | x | + // m4 | v | x | + + BufferedMutator bm = getBufferedMutator(common.mutationSize * 10); + + doAnswer( + flushWithErrors( + this.common.primaryBufferedMutatorParamsCaptor, + common.primaryBufferedMutator, + common.mutation1, + common.mutation3)) + .when(common.primaryBufferedMutator) + .flush(); + doAnswer( + flushWithErrors( + this.common.secondaryBufferedMutatorParamsCaptor, + common.secondaryBufferedMutator, + common.mutation3, + 
common.mutation4)) + .when(common.secondaryBufferedMutator) + .flush(); + + List mutations = + Arrays.asList(common.mutation1, common.mutation2, common.mutation3, common.mutation4); + bm.mutate(mutations); + + // flush has not been called yet + verify(common.primaryBufferedMutator, never()).flush(); + verify(common.secondaryBufferedMutator, never()).flush(); + + try { + bm.flush(); + fail("flush() should have thrown"); + } catch (RetriesExhaustedWithDetailsException e) { + assertThat(e.getNumExceptions()).isEqualTo(3); + assertThat(e.getRow(0)).isEqualTo(common.mutation1); + assertThat(MirroringOperationException.extractRootCause(e.getCause(0))).isNotNull(); + assertThat(MirroringOperationException.extractRootCause(e.getCause(0)).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Primary); + + assertThat(e.getRow(1)).isEqualTo(common.mutation3); + assertThat(MirroringOperationException.extractRootCause(e.getCause(1))).isNotNull(); + assertThat(MirroringOperationException.extractRootCause(e.getCause(1)).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Both); + assertThat(MirroringOperationException.extractRootCause(e.getCause(1)).secondaryException) + .isNotNull(); + + assertThat(e.getRow(2)).isEqualTo(common.mutation4); + assertThat(MirroringOperationException.extractRootCause(e.getCause(2))).isNotNull(); + assertThat(MirroringOperationException.extractRootCause(e.getCause(2)).databaseIdentifier) + .isEqualTo(DatabaseIdentifier.Secondary); + } + + executorServiceRule.waitForExecutor(); + verify(common.primaryBufferedMutator, times(1)).mutate(mutations); + verify(common.secondaryBufferedMutator, times(1)).mutate(mutations); + + verify(common.primaryBufferedMutator, times(1)).flush(); + verify(common.secondaryBufferedMutator, times(1)).flush(); + + verify(common.secondaryWriteErrorConsumerWithMetrics, never()) + .consume(any(HBaseOperation.class), any(Row.class), any(Throwable.class)); + + verify(common.secondaryWriteErrorConsumerWithMetrics, never()) + .consume(any(HBaseOperation.class), 
ArgumentMatchers.anyList(), any(Throwable.class)); + } + + private BufferedMutator getBufferedMutator(long flushThreshold) throws IOException { + return new ConcurrentMirroringBufferedMutator( + common.primaryConnection, + common.secondaryConnection, + common.bufferedMutatorParams, + makeConfigurationWithFlushThreshold(flushThreshold), + executorServiceRule.executorService, + mock(ReferenceCounter.class), + timestamper, + new MirroringTracer()); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestMirroringBufferedMutator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestMirroringBufferedMutator.java new file mode 100644 index 0000000000..913da225a6 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestMirroringBufferedMutator.java @@ -0,0 +1,88 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator; + +import static com.google.common.truth.Truth.assertThat; +import static org.mockito.Mockito.mock; + +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConfiguration; +import com.google.cloud.bigtable.mirroring.hbase1_x.TestConnection; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import java.io.IOException; +import org.apache.hadoop.conf.Configuration; +import org.junit.Rule; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.mockito.junit.MockitoJUnit; +import org.mockito.junit.MockitoRule; + +@RunWith(JUnit4.class) +public class TestMirroringBufferedMutator { + @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule(); + Timestamper timestamper = new NoopTimestamper(); + + @Rule + public final ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + + public final MirroringBufferedMutatorCommon mutatorRule = new MirroringBufferedMutatorCommon(); + + @Test + public void testMirroringBufferedMutatorFactory() throws IOException { + Configuration testConfiguration = new Configuration(false); + testConfiguration.set( + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, + TestConnection.class.getCanonicalName()); + testConfiguration.set( + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "default"); + MirroringConfiguration configuration = new MirroringConfiguration(testConfiguration); 
+ + assertThat( + MirroringBufferedMutator.create( + false, + mutatorRule.primaryConnection, + mutatorRule.secondaryConnection, + mutatorRule.bufferedMutatorParams, + configuration, + mutatorRule.flowController, + executorServiceRule.executorService, + mutatorRule.secondaryWriteErrorConsumerWithMetrics, + mock(ReferenceCounter.class), + timestamper, + new MirroringTracer())) + .isInstanceOf(SequentialMirroringBufferedMutator.class); + + assertThat( + MirroringBufferedMutator.create( + true, + mutatorRule.primaryConnection, + mutatorRule.secondaryConnection, + mutatorRule.bufferedMutatorParams, + configuration, + mutatorRule.flowController, + executorServiceRule.executorService, + mutatorRule.secondaryWriteErrorConsumerWithMetrics, + mock(ReferenceCounter.class), + timestamper, + new MirroringTracer())) + .isInstanceOf(ConcurrentMirroringBufferedMutator.class); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestSequentialMirroringBufferedMutator.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestSequentialMirroringBufferedMutator.java new file mode 100644 index 0000000000..a070af0f47 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/bufferedmutator/TestSequentialMirroringBufferedMutator.java @@ -0,0 +1,388 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.blockMethodCall; +import static com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.MirroringBufferedMutatorCommon.makeConfigurationWithFlushThreshold; +import static com.google.cloud.bigtable.mirroring.hbase1_x.bufferedmutator.MirroringBufferedMutatorCommon.mutateWithErrors; +import static com.google.common.truth.Truth.assertThat; +import static org.junit.Assert.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; + +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.common.primitives.Longs; +import com.google.common.util.concurrent.SettableFuture; +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; +import 
java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicInteger; +import org.apache.hadoop.hbase.client.BufferedMutator; +import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import org.apache.hadoop.hbase.client.Row; +import org.junit.Rule; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.mockito.ArgumentMatchers; +import org.mockito.InOrder; +import org.mockito.Mockito; +import org.mockito.invocation.InvocationOnMock; +import org.mockito.junit.MockitoJUnit; +import org.mockito.junit.MockitoRule; +import org.mockito.stubbing.Answer; + +@RunWith(JUnit4.class) +public class TestSequentialMirroringBufferedMutator { + @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule(); + Timestamper timestamper = new NoopTimestamper(); + + @Rule + public final ExecutorServiceRule executorServiceRule = + ExecutorServiceRule.spyedCachedPoolExecutor(); + + public final MirroringBufferedMutatorCommon common = new MirroringBufferedMutatorCommon(); + + private final List singletonMutation1 = Collections.singletonList(common.mutation1); + private ListenableReferenceCounter referenceCounter = new ListenableReferenceCounter(); + + @Test + public void testBufferedWritesWithoutErrors() throws IOException, InterruptedException { + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate(common.mutation1); + verify(common.primaryBufferedMutator, times(1)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, never()).mutate(ArgumentMatchers.anyList()); + verify(common.secondaryBufferedMutator, never()).mutate(any(Mutation.class)); + bm.mutate(common.mutation1); + verify(common.primaryBufferedMutator, times(2)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, never()).mutate(ArgumentMatchers.anyList()); + 
verify(common.secondaryBufferedMutator, never()).mutate(any(Mutation.class)); + bm.mutate(common.mutation1); + verify(common.primaryBufferedMutator, times(3)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, never()).mutate(ArgumentMatchers.anyList()); + verify(common.secondaryBufferedMutator, never()).mutate(any(Mutation.class)); + bm.mutate(common.mutation1); + executorServiceRule.waitForExecutor(); + verify(common.primaryBufferedMutator, times(4)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, times(1)) + .mutate( + Arrays.asList(common.mutation1, common.mutation1, common.mutation1, common.mutation1)); + verify(common.secondaryBufferedMutator, never()).mutate(any(Mutation.class)); + // Flush is called, and only called once, because we've reached the threshold exactly once. + // The threshold is set to 3.5 times mutation size when BufferedMutator is constructed above. + verify(common.primaryBufferedMutator, times(1)).flush(); + verify(common.secondaryBufferedMutator, times(1)).flush(); + verify(common.resourceReservation, times(4)).release(); + } + + @Test + public void testBufferedMutatorFlush() throws IOException { + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.flush(); + executorServiceRule.waitForExecutor(); + verify(common.primaryBufferedMutator, times(3)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, times(1)) + .mutate(Arrays.asList(common.mutation1, common.mutation1, common.mutation1)); + verify(common.secondaryBufferedMutator, never()).mutate(any(Mutation.class)); + verify(common.secondaryBufferedMutator, times(1)).flush(); + verify(common.resourceReservation, times(3)).release(); + } + + @Test + public void testCloseFlushesWrites() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + BufferedMutator bm = 
getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.close(); + verify(common.primaryBufferedMutator, times(3)).mutate(singletonMutation1); + verify(common.secondaryBufferedMutator, times(1)) + .mutate(Arrays.asList(common.mutation1, common.mutation1, common.mutation1)); + // close() waits until the primary flush() finishes and schedules the secondary flush. + verify(common.primaryBufferedMutator, times(1)).flush(); + // Decrement the initial reference - now only the asynchronous flush should hold a reference in + // the reference counter. + referenceCounter.decrementReferenceCount(); + // Wait until the secondary flush is finished. + referenceCounter.getOnLastReferenceClosed().get(3, TimeUnit.SECONDS); + verify(common.secondaryBufferedMutator, times(1)).flush(); + verify(common.resourceReservation, times(3)).release(); + } + + @Test + public void testCloseIsIdempotent() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.mutate(common.mutation1); + bm.close(); + bm.close(); + verify(common.primaryBufferedMutator, times(1)).flush(); + + // Wait until the secondary flush is finished. + referenceCounter.decrementReferenceCount(); + referenceCounter.getOnLastReferenceClosed().get(3, TimeUnit.SECONDS); + + verify(common.secondaryBufferedMutator, times(1)).flush(); + verify(common.resourceReservation, times(3)).release(); + } + + @Test + public void testFlushesCanBeScheduledSimultaneouslyAndAreExecutedInOrder() throws IOException { + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 0.5)); + + final SettableFuture startFlush = SettableFuture.create(); + blockMethodCall(common.primaryBufferedMutator, startFlush).flush(); + + InOrder inOrder1 = Mockito.inOrder(common.primaryBufferedMutator); + 
InOrder inOrder2 = Mockito.inOrder(common.secondaryBufferedMutator); + + bm.mutate(common.mutation1); + bm.mutate(common.mutation2); + bm.mutate(common.mutation3); + bm.mutate(common.mutation4); + + startFlush.set(null); + executorServiceRule.waitForExecutor(); + + inOrder1.verify(common.primaryBufferedMutator).mutate(Arrays.asList(common.mutation1)); + inOrder1.verify(common.primaryBufferedMutator).mutate(Arrays.asList(common.mutation2)); + inOrder1.verify(common.primaryBufferedMutator).mutate(Arrays.asList(common.mutation3)); + inOrder1.verify(common.primaryBufferedMutator).mutate(Arrays.asList(common.mutation4)); + + inOrder2.verify(common.secondaryBufferedMutator).mutate(Arrays.asList(common.mutation1)); + inOrder2.verify(common.secondaryBufferedMutator).mutate(Arrays.asList(common.mutation2)); + inOrder2.verify(common.secondaryBufferedMutator).mutate(Arrays.asList(common.mutation3)); + inOrder2.verify(common.secondaryBufferedMutator).mutate(Arrays.asList(common.mutation4)); + + verify(common.primaryBufferedMutator, times(4)).mutate(ArgumentMatchers.anyList()); + verify(common.secondaryBufferedMutator, times(4)).mutate(ArgumentMatchers.anyList()); + verify(common.resourceReservation, times(4)).release(); + } + + @Test + public void testErrorsReportedByPrimaryAreNotUsedBySecondary() throws IOException { + doAnswer( + mutateWithErrors( + this.common.primaryBufferedMutatorParamsCaptor, + common.primaryBufferedMutator, + common.mutation1, + common.mutation3)) + .when(common.primaryBufferedMutator) + .mutate(ArgumentMatchers.anyList()); + + BufferedMutator bm = getBufferedMutator((long) (common.mutationSize * 3.5)); + + try { + bm.mutate(common.mutation1); + } catch (IOException ignored) { + + } + bm.mutate(common.mutation2); + try { + bm.mutate(common.mutation3); + } catch (IOException ignored) { + + } + bm.mutate(common.mutation4); + executorServiceRule.waitForExecutor(); + verify(common.secondaryBufferedMutator, times(1)) + .mutate(Arrays.asList(common.mutation2, 
common.mutation4)); + } + + @Test + public void testPrimaryAsyncFlushExceptionIsReportedOnNextMutateCall() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + final Mutation[] mutations = + new Mutation[] { + new Delete(Longs.toByteArray(0)), + new Delete(Longs.toByteArray(1)), + new Delete(Longs.toByteArray(2)) + }; + + final SettableFuture flushesStarted = SettableFuture.create(); + final SettableFuture performFlush = SettableFuture.create(); + final AtomicInteger runningFlushes = new AtomicInteger(3); + + doAnswer( + new Answer() { + @Override + public Object answer(InvocationOnMock invocationOnMock) throws Throwable { + flushesStarted.set(null); + performFlush.get(); + int value = runningFlushes.decrementAndGet(); + + long id = Longs.fromByteArray(mutations[value].getRow()); + RetriesExhaustedWithDetailsException e = + new RetriesExhaustedWithDetailsException( + Arrays.asList((Throwable) new IOException(String.valueOf(id))), + Arrays.asList((Row) mutations[value]), + Arrays.asList("localhost:" + value)); + common + .primaryBufferedMutatorParamsCaptor + .getValue() + .getListener() + .onException(e, common.primaryBufferedMutator); + return null; + } + }) + .when(common.primaryBufferedMutator) + .flush(); + + final BufferedMutator bm = getBufferedMutator(1); + + // Wait until flush is started to ensure that flushes are scheduled in the same order + // as mutations. + bm.mutate(mutations[2]); + bm.mutate(mutations[1]); + bm.mutate(mutations[0]); + flushesStarted.get(1, TimeUnit.SECONDS); + performFlush.set(null); + + // waitForExecutor() waits for the ExecutorService by shutting it down synchronously. + // After this call we can't submit new tasks to it. 
+ executorServiceRule.waitForExecutor(); + doAnswer( + new Answer() { + @Override + public Object answer(InvocationOnMock invocationOnMock) throws Throwable { + return SettableFuture.create(); + } + }) + .when(executorServiceRule.executorService) + .submit(any(Callable.class)); + + verify(common.secondaryBufferedMutator, never()).flush(); + verify(common.resourceReservation, times(3)).release(); + + // We previously used mutate() and thus scheduled asynchronous mutations. + // In this scenario asynchronous flush() on primary threw an exception after mutate() returned. + // Because of that we throw an exception the next time mutate() is called. + try { + bm.mutate(mutations[0]); + verify(executorServiceRule.executorService, times(1)).submit(any(Callable.class)); + fail("Should have thrown"); + } catch (RetriesExhaustedWithDetailsException e) { + assertThat(e.getNumExceptions()).isEqualTo(3); + assertThat(Arrays.asList(e.getRow(0), e.getRow(1), e.getRow(2))) + .containsExactly(mutations[0], mutations[1], mutations[2]); + for (int i = 0; i < 3; i++) { + Row r = e.getRow(i); + long id = Longs.fromByteArray(r.getRow()); + assertThat(e.getCause(i).getMessage()).isEqualTo(String.valueOf(id)); + assertThat(e.getHostnamePort(i)).isEqualTo("localhost:" + id); + } + } + + verify(common.secondaryBufferedMutator, never()).flush(); + verify(common.resourceReservation, times(3)).release(); + } + + @Test + public void testCloseWaitsForOngoingFlushesOnPrimaryMutatorOnly() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + final List mutations = + Arrays.asList( + new Delete(Longs.toByteArray(0)), + new Delete(Longs.toByteArray(1)), + new Delete(Longs.toByteArray(2))); + + long mutationSize = mutations.get(0).heapSize(); + + final SettableFuture closeStarted = SettableFuture.create(); + final SettableFuture closeEnded = SettableFuture.create(); + final SettableFuture unlockPrimaryFlush = SettableFuture.create(); + + final BufferedMutator bm = 
getBufferedMutator((long) 4 * mutationSize); + + blockMethodCall(common.primaryBufferedMutator, unlockPrimaryFlush).flush(); + + bm.mutate(mutations); + + Thread t = + new Thread( + new Runnable() { + @Override + public void run() { + try { + closeStarted.set(null); + bm.close(); // calls flush + closeEnded.set(null); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + }); + t.start(); + closeStarted.get(1, TimeUnit.SECONDS); + + // best effort - we give the closing thread some time to run. + try { + closeEnded.get(1, TimeUnit.SECONDS); + fail("Should have thrown."); + } catch (TimeoutException ignored) { + } + + // The primary flush has completed - only the one flush that was blocked. + verify(common.primaryBufferedMutator, times(1)).flush(); + // The secondary flush has not been called yet. + verify(common.secondaryBufferedMutator, times(0)).flush(); + assertThat(t.isAlive()).isTrue(); + + unlockPrimaryFlush.set(null); + closeEnded.get(3, TimeUnit.SECONDS); + t.join(1000); + assertThat(t.isAlive()).isFalse(); + } + + private BufferedMutator getBufferedMutator(long flushThreshold) throws IOException { + return new SequentialMirroringBufferedMutator( + common.primaryConnection, + common.secondaryConnection, + common.bufferedMutatorParams, + makeConfigurationWithFlushThreshold(flushThreshold), + common.flowController, + executorServiceRule.executorService, + common.secondaryWriteErrorConsumerWithMetrics, + this.referenceCounter, + timestamper, + new MirroringTracer()); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/TestDefaultMismatchDetector.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/TestDefaultMismatchDetector.java new file mode 100644 index 0000000000..9225a5afd8 --- /dev/null +++ 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/TestDefaultMismatchDetector.java @@ -0,0 +1,69 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils; + +import static com.google.common.truth.Truth.assertThat; + +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.DefaultMismatchDetector.LazyBytesHexlifier; +import java.util.ArrayList; +import java.util.List; +import org.apache.hadoop.hbase.client.Get; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +@RunWith(JUnit4.class) +public class TestDefaultMismatchDetector { + @Test + public void testHexlifier() { + assertThat(new LazyBytesHexlifier(new byte[] {}, 0).toString()).isEqualTo(""); + assertThat(new LazyBytesHexlifier(new byte[] {1}, 0).toString()).isEqualTo(""); + assertThat(new LazyBytesHexlifier(new byte[] {1, 2}, 1).toString()).isEqualTo("01..."); + assertThat(new LazyBytesHexlifier(new byte[] {1, 2, 3}, 2).toString()).isEqualTo("01...03"); + assertThat( + new LazyBytesHexlifier(new byte[] {(byte) 0x00, (byte) 0x80, (byte) 0xFF}, 2) + .toString()) + .isEqualTo("00...FF"); + assertThat( + new LazyBytesHexlifier( + new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, 100) + .toString()) + .isEqualTo("0102030405060708090A0B0C0D0E0F10"); + 
assertThat( + new LazyBytesHexlifier( + new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, 5) + .toString()) + .isEqualTo("010203...0F10"); + } + + @Test + public void testListHexlifier() { + Get g1 = new Get(new byte[] {1, 2, 3, 4}); + Get g2 = new Get(new byte[] {(byte) 0xf1, (byte) 0xf2, (byte) 0xf3, (byte) 0xf4}); + + List list = new ArrayList<>(); + assertThat(LazyBytesHexlifier.listOfHexRows(list, 100).toString()).isEqualTo("[]"); + + list.add(g1); + assertThat(LazyBytesHexlifier.listOfHexRows(list, 100).toString()).isEqualTo("[01020304]"); + + list.add(g2); + assertThat(LazyBytesHexlifier.listOfHexRows(list, 100).toString()) + .isEqualTo("[01020304, F1F2F3F4]"); + assertThat(LazyBytesHexlifier.listOfHexRows(list, 2).toString()) + .isEqualTo("[01...04, F1...F4]"); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultAppenderTest.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultAppenderTest.java index 928054eeff..1c8b5579f7 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultAppenderTest.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/DefaultAppenderTest.java @@ -15,6 +15,7 @@ */ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog; +import static com.google.common.truth.Truth.assertThat; import static org.junit.Assert.*; import com.google.common.collect.Sets; @@ -25,9 +26,14 @@ import java.nio.file.Path; import java.nio.file.SimpleFileVisitor; import java.nio.file.attribute.BasicFileAttributes; +import java.text.SimpleDateFormat; import java.util.ArrayList; 
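The truncation rule that the LazyBytesHexlifier tests above pin down - print at most `maxBytes` bytes, splitting the budget between the head and the tail of the array around an ellipsis - can be sketched as follows. This is a hypothetical stdlib-only re-implementation for illustration (the name `hexlify` and the class are invented here, not the mirroring client's actual code):

```java
public class HexlifySketch {
    // Hypothetical re-implementation of the truncation rule exercised by the tests:
    // if maxBytes covers the whole array, print everything; otherwise print the first
    // maxBytes - maxBytes / 2 bytes, an ellipsis, then the last maxBytes / 2 bytes.
    static String hexlify(byte[] bytes, int maxBytes) {
        if (maxBytes <= 0) {
            return "";
        }
        StringBuilder sb = new StringBuilder();
        if (maxBytes >= bytes.length) {
            for (byte b : bytes) {
                sb.append(String.format("%02X", b));
            }
            return sb.toString();
        }
        int tail = maxBytes / 2;     // bytes shown after the ellipsis
        int head = maxBytes - tail;  // bytes shown before it
        for (int i = 0; i < head; i++) {
            sb.append(String.format("%02X", bytes[i]));
        }
        sb.append("...");
        for (int i = bytes.length - tail; i < bytes.length; i++) {
            sb.append(String.format("%02X", bytes[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(hexlify(new byte[] {1, 2, 3}, 2));  // 01...03
        System.out.println(
            hexlify(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, 5));  // 010203...0F10
    }
}
```

Giving the head the larger half of an odd budget reproduces the `"01..."` case for `maxBytes = 1`, where the tail gets zero bytes but the ellipsis still marks the truncation.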
+import java.util.Date; import java.util.List; import java.util.Set; +import java.util.TimeZone; +import java.util.regex.Matcher; +import java.util.regex.Pattern; import org.junit.After; import org.junit.Before; import org.junit.Test; @@ -75,28 +81,50 @@ private List listLogFiles() throws IOException { return paths; } + private DefaultAppender createAppender() throws IOException { + final int maxBufferSize = 4096; // just an arbitrary value + final boolean dropOnOverflow = false; + return new DefaultAppender(tmpdir.resolve("test").toString(), maxBufferSize, dropOnOverflow); + } + @Test public void startupAndShutdown() throws Exception { - try (Appender appender = new DefaultAppender(tmpdir.resolve("test").toString(), 4096, false)) { + try (Appender appender = createAppender()) { appender.append("foo".getBytes(StandardCharsets.UTF_8)); } } @Test public void pathNamesHaveTimestampAndTid() throws Exception { - try (Appender appender = new DefaultAppender(tmpdir.resolve("test").toString(), 4096, false)) { + final SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd_HH-mm-ss.SSS"); + // File names should look like "test.yyyy-MM-dd_HH-mm-ss.SSS.TID" + final Pattern filePattern = + Pattern.compile( + "test\\.(20[0-9]{2}-[0-9]{2}-[0-9]{2}_[0-9]{2}-[0-9]{2}-[0-9]{2}\\.[0-9]+)\\.([0-9]+)"); + final long thisThreadId = Thread.currentThread().getId(); + + Date beforeLogCreation = new Date(); + try (Appender appender = createAppender()) { appender.append("foo".getBytes(StandardCharsets.UTF_8)); } - try (Appender appender = new DefaultAppender(tmpdir.resolve("test").toString(), 4096, false)) { + try (Appender appender = createAppender()) { appender.append("bar".getBytes(StandardCharsets.UTF_8)); } + Date afterLogCreation = new Date(); List paths = listLogFiles(); assertEquals(2, paths.size()); + for (String path : paths) { - assertTrue( - path.matches( - "test\\.20[0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]-[0-9][0-9]-[0-9][0-9]\\.[0-9]+\\.[0-9]+")); + Matcher matcher =
filePattern.matcher(path); + assertTrue(matcher.matches()); + String timestampStr = matcher.group(1); + + dateFormat.setTimeZone(TimeZone.getTimeZone("UTC")); + final Date timestamp = dateFormat.parse(timestampStr); + assertThat(timestamp).isAtLeast(beforeLogCreation); + assertThat(timestamp).isAtMost(afterLogCreation); + assertEquals(Long.parseLong(matcher.group(2)), thisThreadId); } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LoggerTest.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/FailedMutationLoggerTest.java similarity index 88% rename from bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LoggerTest.java rename to bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/FailedMutationLoggerTest.java index bfd51bc3f3..08689a7ad9 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LoggerTest.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/FailedMutationLoggerTest.java @@ -30,18 +30,19 @@ import org.mockito.junit.MockitoRule; @RunWith(JUnit4.class) -public class LoggerTest { +public class FailedMutationLoggerTest { @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule(); @Mock Serializer serializer; @Mock Appender appender; @Test public void mutationsAreSerializedAndAppended() throws Exception { - try (Logger logger = new Logger(appender, serializer)) { + try (FailedMutationLogger failedMutationLogger = + 
new FailedMutationLogger(appender, serializer)) { try { throw new RuntimeException("OMG!"); } catch (RuntimeException e) { - logger.mutationFailed(new Put(new byte[] {'r'}), e); + failedMutationLogger.mutationFailed(new Put(new byte[] {'r'}), e); } } verify(serializer, times(1)) diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LogBufferTest.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LogBufferTest.java index ceb75bfd8e..83321fa230 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LogBufferTest.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/faillog/LogBufferTest.java @@ -220,11 +220,10 @@ public void drainingClosedBufferReturnsContentsAndThenNull() throws InterruptedE assertNull(res.poll()); // The buffer should no longer block on `drain()`. Instead, it should return `null`. 
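The LogBuffer contract exercised by the surrounding tests - `append()` buffers bytes up to a size limit, `drain()` blocks until data arrives, and `close()` unblocks both so that `drain()` returns `null` instead of hanging - can be sketched with plain monitor synchronization. The class and field names below are illustrative assumptions, not the real LogBuffer implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class LogBufferSketch {
    private final Queue<byte[]> entries = new ArrayDeque<>();
    private final int capacityBytes;
    private int usedBytes = 0;
    private boolean closed = false;

    public LogBufferSketch(int capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    // Blocks while the buffer is full; throws once the buffer is closed.
    public synchronized void append(byte[] data) throws InterruptedException {
        while (!closed && usedBytes + data.length > capacityBytes && !entries.isEmpty()) {
            wait();
        }
        if (closed) {
            throw new IllegalStateException("buffer closed");
        }
        entries.add(data);
        usedBytes += data.length;
        notifyAll();
    }

    // Blocks until something is buffered; returns null once closed and empty.
    public synchronized byte[] drain() throws InterruptedException {
        while (entries.isEmpty() && !closed) {
            wait();
        }
        byte[] data = entries.poll();
        if (data == null) {
            return null; // closed and drained - do not block forever
        }
        usedBytes -= data.length;
        notifyAll();
        return data;
    }

    public synchronized void close() {
        closed = true;
        notifyAll(); // unblock any thread stuck in append() or drain()
    }
}
```

The `notifyAll()` in `close()` is what the `closingUnblocksDrain` and `closingUnblocksAppend` tests below depend on: without it, a blocked thread would wait forever and the tests' timeouts would fire.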
- res = buffer.drain(); - assertNull(res); + assertNull(buffer.drain()); } - @Test + @Test(timeout = 5000) public void closingUnblocksDrain() throws InterruptedException { final LogBuffer buffer = new LogBuffer(11, true); @@ -252,7 +251,7 @@ public void run() { thread.join(); } - @Test + @Test(timeout = 5000) public void closingUnblocksAppend() throws InterruptedException, ExecutionException { final LogBuffer buffer = new LogBuffer(3, false); final byte[] buf1 = new byte[] {0, 1, 2}; @@ -299,17 +298,19 @@ public void run() throws Throwable { @Test public void closureCauseIsReported() throws InterruptedException { final LogBuffer buffer = new LogBuffer(11, true); - buffer.closeWithCause(new IOException("foo")); - buffer.closeWithCause(new IOException("bar")); + // Close the buffer with an exception, effectively simulating the storage failing. + buffer.closeWithCause(new IOException("exception_1")); + // Verify that the first exception is not clobbered by further closures. + buffer.closeWithCause(new IOException("exception_2")); buffer.close(); final byte[] buf1 = new byte[] {0, 1, 2}; try { - buffer.append("foo".getBytes(StandardCharsets.UTF_8)); + buffer.append("failed_mutation".getBytes(StandardCharsets.UTF_8)); fail("IllegalStateException was expected."); } catch (IllegalStateException e) { Throwable cause = e.getCause(); assertNotNull(cause); - assertEquals("foo", cause.getMessage()); + assertEquals("exception_1", cause.getMessage()); } } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestFlowController.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestFlowController.java index ad29d01ecd..4bdf1880e5 100644 --- 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestFlowController.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestFlowController.java @@ -16,10 +16,13 @@ package com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol; import static com.google.common.truth.Truth.assertThat; +import static org.junit.Assert.fail; +import static org.mockito.ArgumentMatchers.anyBoolean; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.never; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; import com.google.common.util.concurrent.ListenableFuture; @@ -35,32 +38,36 @@ @RunWith(JUnit4.class) public class TestFlowController { - static class Ledger implements SingleQueueFlowControlStrategy.Ledger { + static class SingleQueueTestLedger implements SingleQueueFlowControlStrategy.Ledger { public int numRequestInFlight = 0; public int canAcquireResourcesCallsCount = 0; public int maxInFlightRequests = 0; - public final int limit; + public final int limitInFlightRequests; public List acquireOrdering = new ArrayList<>(); public SettableFuture futureToNotifyWhenCanAcquireResourceIsCalled = null; - Ledger(int limit) { - this.limit = limit; + SingleQueueTestLedger(int limitInFlightRequests) { + this.limitInFlightRequests = limitInFlightRequests; } - @Override public boolean canAcquireResource(RequestResourcesDescription resource) { if (this.futureToNotifyWhenCanAcquireResourceIsCalled != null) { this.futureToNotifyWhenCanAcquireResourceIsCalled.set(null); } + this.canAcquireResourcesCallsCount += 1; - return this.numRequestInFlight < this.limit; + return 
this.numRequestInFlight < this.limitInFlightRequests; } @Override - public void accountAcquiredResource(RequestResourcesDescription resource) { - this.acquireOrdering.add(resource); - this.numRequestInFlight += 1; - this.maxInFlightRequests = Math.max(this.maxInFlightRequests, this.numRequestInFlight); + public boolean tryAcquireResource(RequestResourcesDescription resource) { + if (this.canAcquireResource(resource)) { + this.acquireOrdering.add(resource); + this.numRequestInFlight += 1; + this.maxInFlightRequests = Math.max(this.maxInFlightRequests, this.numRequestInFlight); + return true; + } + return false; } @Override @@ -74,8 +81,8 @@ public void testLockingAndUnlockingThreads() throws ExecutionException, InterruptedException, TimeoutException { // Mutex - Ledger ledger = new Ledger(1); - FlowControlStrategy flowControlStrategy = new SingleQueueFlowControlStrategy(ledger); + SingleQueueTestLedger testLedger = new SingleQueueTestLedger(1); + FlowControlStrategy flowControlStrategy = new SingleQueueFlowControlStrategy(testLedger); final FlowController fc = new FlowController(flowControlStrategy); final SettableFuture threadStarted = SettableFuture.create(); @@ -83,8 +90,7 @@ public void testLockingAndUnlockingThreads() RequestResourcesDescription description = createRequest(1); ResourceReservation resourceReservation = fc.asyncRequestResource(description).get(); - - int canAcquireCalls = ledger.canAcquireResourcesCallsCount; + assertThat(testLedger.canAcquireResourcesCallsCount).isEqualTo(1); Thread thread = new Thread() { @@ -106,11 +112,11 @@ public void run() { threadStarted.get(3, TimeUnit.SECONDS); Thread.sleep(300); assertThat(threadEnded.isDone()).isFalse(); - assertThat(ledger.canAcquireResourcesCallsCount).isEqualTo(canAcquireCalls + 1); + assertThat(testLedger.canAcquireResourcesCallsCount).isEqualTo(2); resourceReservation.release(); threadEnded.get(3, TimeUnit.SECONDS); - assertThat(ledger.canAcquireResourcesCallsCount).isEqualTo(canAcquireCalls + 
2); + assertThat(testLedger.canAcquireResourcesCallsCount).isEqualTo(3); } private RequestResourcesDescription createRequest(int size) { @@ -118,25 +124,24 @@ private RequestResourcesDescription createRequest(int size) { } @Test - public void testLockingAndUnlockingOrdering() + public void testSingleQueueStrategyAllowsRequestsInOrder() throws ExecutionException, InterruptedException, TimeoutException { - Ledger ledger = new Ledger(2); - FlowControlStrategy flowControlStrategy = new SingleQueueFlowControlStrategy(ledger); + int limitRequestsInFlight = 2; + SingleQueueTestLedger testLedger = new SingleQueueTestLedger(limitRequestsInFlight); + FlowControlStrategy flowControlStrategy = new SingleQueueFlowControlStrategy(testLedger); final FlowController fc = new FlowController(flowControlStrategy); + // This FlowController limits the number of requests in flight and admits them in order. - RequestResourcesDescription description1 = createRequest(2); - RequestResourcesDescription description2 = createRequest(1); - - // Critical section is full. - ResourceReservation reservation1 = fc.asyncRequestResource(createRequest(2)).get(); - ResourceReservation reservation2 = fc.asyncRequestResource(createRequest(1)).get(); + ResourceReservation reservation1 = fc.asyncRequestResource(createRequest(2)).get(); + ResourceReservation reservation2 = fc.asyncRequestResource(createRequest(1)).get(); + // The maximal number of requests in flight has been reached.
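The Ledger refactor in this hunk replaces the two-step `canAcquireResource()`/`accountAcquiredResource()` pair with a single `tryAcquireResource()` that checks and accounts for a request in one atomic step. A minimal ledger of that shape - hypothetical names, counting only the number of requests in flight - might look like:

```java
public class CountingLedgerSketch {
    private final int limitInFlight;
    private int inFlight = 0;

    public CountingLedgerSketch(int limitInFlight) {
        this.limitInFlight = limitInFlight;
    }

    // Admits the request and accounts for it in one step, or rejects it.
    // Combining check and accounting avoids a race between two callers that
    // both see "can acquire" and then both account, exceeding the limit.
    public synchronized boolean tryAcquireResource() {
        if (inFlight >= limitInFlight) {
            return false; // caller queues and retries when a resource is released
        }
        inFlight += 1;
        return true;
    }

    public synchronized void releaseResource() {
        inFlight -= 1;
    }
}
```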
final int numThreads = 1000; List threads = new ArrayList<>(); for (int threadId = 0; threadId < numThreads; threadId++) { - final SettableFuture threadStarted = SettableFuture.create(); - ledger.futureToNotifyWhenCanAcquireResourceIsCalled = threadStarted; + final SettableFuture threadBlockedOnAcquiringResources = SettableFuture.create(); + testLedger.futureToNotifyWhenCanAcquireResourceIsCalled = threadBlockedOnAcquiringResources; + final int finalThreadId = threadId; Thread thread = @@ -147,6 +152,7 @@ public void run() { RequestResourcesDescription r = createRequest(finalThreadId); fc.asyncRequestResource(r).get().release(); } catch (InterruptedException | ExecutionException ignored) { + fail("shouldn't have thrown"); } } }; @@ -155,8 +161,10 @@ public void run() { thread.start(); threads.add(thread); - // Wait until `fc.acquire` is called before starting next thread to ensure ordering. - threadStarted.get(3, TimeUnit.SECONDS); + // We want to check that our threads are given resources in the order they asked for them, so to + // have a well-defined order we can only have one thread running before it blocks on the future + // received from FlowController. + threadBlockedOnAcquiringResources.get(3, TimeUnit.SECONDS); + } reservation1.release(); @@ -166,21 +174,21 @@ public void run() { t.join(); } - assertThat(ledger.acquireOrdering).hasSize(numThreads + 2); // + 2 initial entries - assertThat(ledger.maxInFlightRequests).isEqualTo(2); + assertThat(testLedger.acquireOrdering).hasSize(numThreads + 2); // + 2 initial entries + assertThat(testLedger.maxInFlightRequests).isEqualTo(limitRequestsInFlight); - for (int i = 0; i < ledger.acquireOrdering.size(); i++) { - RequestResourcesDescription d = ledger.acquireOrdering.get(i); + for (int i = 0; i < testLedger.acquireOrdering.size(); i++) { + RequestResourcesDescription resourceDescriptor = testLedger.acquireOrdering.get(i); int expectedValue = Math.abs(i - 2); - assertThat(d.numberOfResults).isEqualTo(expectedValue); + assertThat(resourceDescriptor.numberOfResults).isEqualTo(expectedValue); } } @Test public void testCancelledReservationFutureIsRemovedFromFlowControllerWaitersList() throws ExecutionException, InterruptedException, TimeoutException { - Ledger ledger = new Ledger(1); - FlowControlStrategy flowControlStrategy = new SingleQueueFlowControlStrategy(ledger); + SingleQueueTestLedger testLedger = new SingleQueueTestLedger(1); + FlowControlStrategy flowControlStrategy = new SingleQueueFlowControlStrategy(testLedger); final FlowController fc = new FlowController(flowControlStrategy); RequestResourcesDescription description = createRequest(1); @@ -193,11 +201,12 @@ public void testCancelledReservationFutureIsRemovedFromFlowControllerWaitersList ListenableFuture reservationFuture2 = fc.asyncRequestResource(createRequest(1)); - // Current thread is in critical section, future is first in the queue, thread is second. + // Current thread is in critical section, futures are queued. + // reservationFuture1 is first and reservationFuture2 is second.
assertThat(reservationFuture1.cancel(true)).isTrue(); Thread.sleep(300); - assertThat(ledger.maxInFlightRequests).isEqualTo(1); + assertThat(testLedger.maxInFlightRequests).isEqualTo(1); // Releasing the resource should allow second future. resourceReservation.release(); @@ -207,8 +216,8 @@ public void testCancelledReservationFutureIsRemovedFromFlowControllerWaitersList @Test public void testCancellingGrantedReservationIsNotSuccessful() throws ExecutionException, InterruptedException { - Ledger ledger = new Ledger(1); - FlowControlStrategy flowControlStrategy = new SingleQueueFlowControlStrategy(ledger); + SingleQueueTestLedger testLedger = new SingleQueueTestLedger(1); + FlowControlStrategy flowControlStrategy = new SingleQueueFlowControlStrategy(testLedger); final FlowController fc = new FlowController(flowControlStrategy); ListenableFuture reservationFuture1 = @@ -229,12 +238,16 @@ public void testCancellingGrantedReservationFuture() { } @Test - public void testCancellingPendingReservationFuture() { - ResourceReservation reservation = mock(ResourceReservation.class); - SettableFuture grantedFuture = SettableFuture.create(); + public void testCancellingPendingReservationFuture() + throws ExecutionException, InterruptedException { + ExecutionException flowControllerException = + new ExecutionException(new Exception("FlowController rejected request")); - FlowController.cancelRequest(grantedFuture); - verify(reservation, never()).release(); + ListenableFuture pendingFuture = mock(ListenableFuture.class); + when(pendingFuture.cancel(anyBoolean())).thenReturn(false); + when(pendingFuture.get()).thenThrow(flowControllerException); + + FlowController.cancelRequest(pendingFuture); } @Test diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestRequestCountingFlowControlStrategy.java 
b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestRequestCountingFlowControlStrategy.java index e3982ee9c8..0f988679d9 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestRequestCountingFlowControlStrategy.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestRequestCountingFlowControlStrategy.java @@ -25,7 +25,7 @@ public class TestRequestCountingFlowControlStrategy { @Test public void testBlockingWhenCounterReachesTheLimit() { - RequestCountingFlowControlStrategy fc = new RequestCountingFlowControlStrategy(2); + RequestCountingFlowControlStrategy fc = new RequestCountingFlowControlStrategy(2, 1000); assertThat(fc.tryAcquireResource(new RequestResourcesDescription(new boolean[] {}))) .isTrue(); // 0 @@ -44,7 +44,7 @@ public void testBlockingWhenCounterReachesTheLimit() { @Test public void testOversizedRequestIsAllowedIfNoOtherResourcesAreAcquired() { - RequestCountingFlowControlStrategy fc = new RequestCountingFlowControlStrategy(2); + RequestCountingFlowControlStrategy fc = new RequestCountingFlowControlStrategy(2, 1000); assertThat( fc.tryAcquireResource( @@ -65,4 +65,23 @@ public void testOversizedRequestIsAllowedIfNoOtherResourcesAreAcquired() { new RequestResourcesDescription(new boolean[] {true, true, true}))) .isTrue(); } + + @Test + public void testBlockingOnRequestSize() { + RequestCountingFlowControlStrategy fc = new RequestCountingFlowControlStrategy(1000, 16); + + assertThat(fc.tryAcquireResource(new RequestResourcesDescription(new boolean[] {}))) + .isTrue(); // 0 + assertThat(fc.tryAcquireResource(new RequestResourcesDescription(new boolean[] {true}))) + .isTrue(); // 8 + 
assertThat(fc.tryAcquireResource(new RequestResourcesDescription(new boolean[] {true}))) + .isTrue(); // 16 + assertThat(fc.tryAcquireResource(new RequestResourcesDescription(new boolean[] {true}))) + .isFalse(); // 16 + fc.releaseResource(new RequestResourcesDescription(new boolean[] {true})); // 8 + assertThat(fc.tryAcquireResource(new RequestResourcesDescription(new boolean[] {true}))) + .isTrue(); // 16 + assertThat(fc.tryAcquireResource(new RequestResourcesDescription(new boolean[] {true}))) + .isFalse(); // 16 + } } diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestRequestResourcesDescription.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestRequestResourcesDescription.java index 6be2d13b51..57b13ec946 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestRequestResourcesDescription.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/flowcontrol/TestRequestResourcesDescription.java @@ -19,10 +19,14 @@ import java.io.IOException; import java.util.Arrays; +import java.util.List; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue.Type; import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.RowMutations; import org.junit.Test; @@ -39,7 +43,7 @@ public void testCalculatingSize() throws IOException { // 2 aligned to 8 assertThat(new 
RequestResourcesDescription(new boolean[] {true, false}).sizeInBytes) .isEqualTo(8); - // 9 aligned to 8 + // 9 aligned to 16 assertThat(new RequestResourcesDescription(new boolean[9]).sizeInBytes).isEqualTo(16); Cell c1 = CellUtil.createCell( @@ -97,4 +101,37 @@ public void testCalculatingSize() throws IOException { assertThat(new RequestResourcesDescription(Arrays.asList(delete, delete2)).sizeInBytes) .isAtLeast(15); // 14 bytes of data + some overhead } + + @Test + public void testCountingSimpleRequests() throws IOException { + boolean bool = true; + boolean[] boolArray = new boolean[] {true, false}; + Result result = Result.create(new Cell[0]); + Result[] resultArray = new Result[] {Result.create(new Cell[0]), Result.create(new Cell[0])}; + Mutation mutation = new Put("r1".getBytes()); + List mutationList = + Arrays.asList(new Put("r1".getBytes()), new Delete("r2".getBytes())); + + List readOperations = Arrays.asList(new Get("r1".getBytes()), new Get("r2".getBytes())); + Result[] successfulReadResults = resultArray; + + RowMutations rowMutations = new RowMutations("r1".getBytes()); + rowMutations.add(new Put("r1".getBytes())); + rowMutations.add(new Delete("r1".getBytes())); + + assertThat(new RequestResourcesDescription(bool).numberOfResults).isEqualTo(1); + assertThat(new RequestResourcesDescription(boolArray).numberOfResults).isEqualTo(2); + + assertThat(new RequestResourcesDescription(result).numberOfResults).isEqualTo(1); + assertThat(new RequestResourcesDescription(resultArray).numberOfResults).isEqualTo(2); + + assertThat(new RequestResourcesDescription(mutation).numberOfResults).isEqualTo(1); + assertThat(new RequestResourcesDescription(mutationList).numberOfResults).isEqualTo(2); + + assertThat(new RequestResourcesDescription(rowMutations).numberOfResults).isEqualTo(1); + + assertThat( + new RequestResourcesDescription(readOperations, successfulReadResults).numberOfResults) + .isEqualTo(2); + } } diff --git 
a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/TestListenableReferenceCounter.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/TestListenableReferenceCounter.java similarity index 89% rename from bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/TestListenableReferenceCounter.java rename to bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/TestListenableReferenceCounter.java index 5b50b85428..688850f712 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/TestListenableReferenceCounter.java +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/referencecounting/TestListenableReferenceCounter.java @@ -13,8 +13,9 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -package com.google.cloud.bigtable.mirroring.hbase1_x.utils; +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounterUtils.holdReferenceUntilCompletion; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.never; import static org.mockito.Mockito.spy; @@ -54,7 +55,7 @@ public void testCounterIsDecrementedWhenReferenceIsDone() { ListenableReferenceCounter listenableReferenceCounter = spy(new ListenableReferenceCounter()); SettableFuture future = SettableFuture.create(); verify(listenableReferenceCounter, never()).incrementReferenceCount(); - listenableReferenceCounter.holdReferenceUntilCompletion(future); + holdReferenceUntilCompletion(listenableReferenceCounter, future); verify(listenableReferenceCounter, times(1)).incrementReferenceCount(); verify(listenableReferenceCounter, never()).decrementReferenceCount(); future.set(null); @@ -67,7 +68,7 @@ public void testCounterIsDecrementedWhenReferenceThrowsException() { ListenableReferenceCounter listenableReferenceCounter = spy(new ListenableReferenceCounter()); SettableFuture future = SettableFuture.create(); verify(listenableReferenceCounter, never()).incrementReferenceCount(); - listenableReferenceCounter.holdReferenceUntilCompletion(future); + holdReferenceUntilCompletion(listenableReferenceCounter, future); verify(listenableReferenceCounter, times(1)).incrementReferenceCount(); verify(listenableReferenceCounter, never()).decrementReferenceCount(); future.setException(new Exception("expected")); diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TestCopyingTimestamper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TestCopyingTimestamper.java 
new file mode 100644 index 0000000000..08af7c6438 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TestCopyingTimestamper.java @@ -0,0 +1,190 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.TestInPlaceTimestamper.getCell; +import static com.google.common.truth.Truth.assertThat; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.List; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.client.Append; +import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Increment; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +@RunWith(JUnit4.class) +public class TestCopyingTimestamper { + @Test + public void testFillingPutTimestamps() throws IOException { + Put inputPut = new Put("row".getBytes(StandardCharsets.UTF_8)); + inputPut.addColumn("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + inputPut.addColumn("f1".getBytes(), "q2".getBytes(), 123L, 
"v".getBytes()); + inputPut.addImmutable("f2".getBytes(), "q1".getBytes(), "v".getBytes()); + inputPut.addImmutable("f2".getBytes(), "q2".getBytes(), 123L, "v".getBytes()); + + long timestampBefore = System.currentTimeMillis(); + Put resultPut = new CopyingTimestamper().fillTimestamp(inputPut); + long timestampAfter = System.currentTimeMillis(); + + // Input is not modified + assertThat(getCell(inputPut, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputPut, 1).getTimestamp()).isEqualTo(123L); + assertThat(getCell(inputPut, 2).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputPut, 3).getTimestamp()).isEqualTo(123L); + + // Result has assigned timestamps + assertThat(getCell(resultPut, 0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(resultPut, 0).getTimestamp()).isAtMost(timestampAfter); + + assertThat(getCell(resultPut, 1).getTimestamp()).isEqualTo(123L); + + assertThat(getCell(resultPut, 2).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(resultPut, 2).getTimestamp()).isAtMost(timestampAfter); + + assertThat(resultPut.get("f2".getBytes(), "q2".getBytes()).get(0).getTimestamp()) + .isEqualTo(123L); + } + + @Test + public void testFillingRowMutationsTimestamps() throws IOException { + RowMutations inputRowMutations = new RowMutations("row".getBytes()); + Put inputPut = new Put("row".getBytes(StandardCharsets.UTF_8)); + inputPut.addColumn("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + inputPut.addColumn("f1".getBytes(), "q2".getBytes(), 123L, "v".getBytes()); + inputRowMutations.add(inputPut); + + Delete inputDelete = new Delete("row".getBytes(StandardCharsets.UTF_8)); + inputDelete.addColumn("f1".getBytes(), "q1".getBytes()); + inputDelete.addColumn("f1".getBytes(), "q2".getBytes(), 123L); + inputRowMutations.add(inputDelete); + + long timestampBefore = System.currentTimeMillis(); + RowMutations rm = new CopyingTimestamper().fillTimestamp(inputRowMutations); 
+ long timestampAfter = System.currentTimeMillis(); + + // Input is not modified. + assertThat(getCell(inputRowMutations, 0, 0).getTimestamp()) + .isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputRowMutations, 0, 1).getTimestamp()).isEqualTo(123L); + + assertThat(getCell(inputRowMutations, 1, 0).getTimestamp()) + .isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputRowMutations, 1, 1).getTimestamp()).isEqualTo(123L); + + // Result has assigned timestamps. + assertThat(getCell(rm, 0, 0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(rm, 0, 0).getTimestamp()).isAtMost(timestampAfter); + assertThat(getCell(rm, 0, 1).getTimestamp()).isEqualTo(123L); + + assertThat(getCell(rm, 1, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(rm, 1, 1).getTimestamp()).isEqualTo(123L); + } + + @Test + public void testFillingListOfMutations() throws IOException { + Put inputPut = new Put("row".getBytes(StandardCharsets.UTF_8)); + inputPut.addColumn("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + + Delete inputDelete = new Delete("row".getBytes(StandardCharsets.UTF_8)); + inputDelete.addColumn("f1".getBytes(), "q1".getBytes()); + + Increment inputIncrement = new Increment("row".getBytes(StandardCharsets.UTF_8)); + inputIncrement.addColumn("f1".getBytes(), "q1".getBytes(), 1); + + Append inputAppend = new Append("row".getBytes(StandardCharsets.UTF_8)); + inputAppend.add("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + + RowMutations inputRowMutations = new RowMutations("row".getBytes()); + Put inputRowMutationsPut = new Put("row".getBytes()); + inputRowMutationsPut.addColumn("f1".getBytes(), "q2".getBytes(), "v".getBytes()); + inputRowMutations.add(inputRowMutationsPut); + + Delete inputRowMutationsDelete = new Delete("row".getBytes()); + inputRowMutationsDelete.addColumn("f1".getBytes(), "q2".getBytes()); + inputRowMutations.add(inputRowMutationsDelete); + + long timestampBefore = 
System.currentTimeMillis(); + List result = + new CopyingTimestamper() + .fillTimestamp( + Arrays.asList( + inputPut, inputDelete, inputIncrement, inputAppend, inputRowMutations)); + long timestampAfter = System.currentTimeMillis(); + + // Input is not modified + assertThat(getCell(inputPut, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputDelete, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputIncrement, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputAppend, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputRowMutations, 0, 0).getTimestamp()) + .isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputRowMutations, 1, 0).getTimestamp()) + .isEqualTo(HConstants.LATEST_TIMESTAMP); + + // Result has assigned timestamps + Put resultPut = (Put) result.get(0); + Delete resultDelete = (Delete) result.get(1); + Increment resultIncrement = (Increment) result.get(2); + Append resultAppend = (Append) result.get(3); + RowMutations resultRowMutations = (RowMutations) result.get(4); + + assertThat(getCell(resultPut, 0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(resultPut, 0).getTimestamp()).isAtMost(timestampAfter); + assertThat(getCell(resultDelete, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(resultIncrement, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(resultAppend, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(resultRowMutations, 0, 0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(resultRowMutations, 0, 0).getTimestamp()).isAtMost(timestampAfter); + assertThat(getCell(resultRowMutations, 1, 0).getTimestamp()) + .isEqualTo(HConstants.LATEST_TIMESTAMP); + } + + @Test + public void testFillingListOfRowMutations() throws IOException { + Put p = new 
Put("row".getBytes(StandardCharsets.UTF_8)); + p.addColumn("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + + Delete d = new Delete("row".getBytes(StandardCharsets.UTF_8)); + d.addColumn("f1".getBytes(), "q1".getBytes()); + + RowMutations rm = new RowMutations("row".getBytes()); + rm.add(p); + rm.add(d); + + long timestampBefore = System.currentTimeMillis(); + List result = new CopyingTimestamper().fillTimestamp(Arrays.asList(rm)); + long timestampAfter = System.currentTimeMillis(); + + RowMutations resultRowMutations = result.get(0); + + assertThat(getCell(rm, 0, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(rm, 1, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + + assertThat(getCell(resultRowMutations, 0, 0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(resultRowMutations, 0, 0).getTimestamp()).isAtMost(timestampAfter); + + assertThat(getCell(resultRowMutations, 1, 0).getTimestamp()) + .isEqualTo(HConstants.LATEST_TIMESTAMP); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TestInPlaceTimestamper.java b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TestInPlaceTimestamper.java new file mode 100644 index 0000000000..85ce9f2bd4 --- /dev/null +++ b/bigtable-hbase-mirroring-client-1.x-parent/bigtable-hbase-mirroring-client-1.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase1_x/utils/timestamper/TestInPlaceTimestamper.java @@ -0,0 +1,190 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper; + +import static com.google.common.truth.Truth.assertThat; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.List; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellScanner; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.client.Append; +import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Increment; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +@RunWith(JUnit4.class) +public class TestInPlaceTimestamper { + @Test + public void testFillingPutTimestamps() throws IOException { + Put inputPut = new Put("row".getBytes(StandardCharsets.UTF_8)); + inputPut.addColumn("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + inputPut.addColumn("f1".getBytes(), "q2".getBytes(), 123L, "v".getBytes()); + inputPut.addImmutable("f2".getBytes(), "q1".getBytes(), "v".getBytes()); + inputPut.addImmutable("f2".getBytes(), "q2".getBytes(), 123L, "v".getBytes()); + + long timestampBefore = System.currentTimeMillis(); + Put outputPut = new InPlaceTimestamper().fillTimestamp(inputPut); + long timestampAfter = System.currentTimeMillis(); + + assertRowEquals(outputPut, inputPut); + + assertThat(getCell(inputPut, 
0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(inputPut, 0).getTimestamp()).isAtMost(timestampAfter); + + assertThat(getCell(inputPut, 1).getTimestamp()).isEqualTo(123L); + + assertThat(getCell(inputPut, 2).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(inputPut, 2).getTimestamp()).isAtMost(timestampAfter); + + assertThat(getCell(inputPut, 3).getTimestamp()).isEqualTo(123L); + } + + @Test + public void testFillingRowMutationsTimestamps() throws IOException { + RowMutations inputRowMutations = new RowMutations("row".getBytes()); + Put inputPut = new Put("row".getBytes(StandardCharsets.UTF_8)); + inputPut.addColumn("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + inputPut.addColumn("f1".getBytes(), "q2".getBytes(), 123L, "v".getBytes()); + inputRowMutations.add(inputPut); + + Delete inputDelete = new Delete("row".getBytes(StandardCharsets.UTF_8)); + inputDelete.addColumn("f1".getBytes(), "q1".getBytes()); + inputDelete.addColumn("f1".getBytes(), "q2".getBytes(), 123L); + inputRowMutations.add(inputDelete); + + long timestampBefore = System.currentTimeMillis(); + RowMutations result = new InPlaceTimestamper().fillTimestamp(inputRowMutations); + long timestampAfter = System.currentTimeMillis(); + + assertRowEquals(inputRowMutations, result); + + assertThat(getCell(inputRowMutations, 0, 0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(inputRowMutations, 0, 0).getTimestamp()).isAtMost(timestampAfter); + assertThat(getCell(inputRowMutations, 0, 1).getTimestamp()).isEqualTo(123L); + + assertThat(getCell(inputRowMutations, 1, 0).getTimestamp()) + .isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputRowMutations, 1, 1).getTimestamp()).isEqualTo(123L); + } + + @Test + public void testFillingListOfMutations() throws IOException { + Put inputPut = new Put("row".getBytes(StandardCharsets.UTF_8)); + inputPut.addColumn("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + + Delete inputDelete = new 
Delete("row".getBytes(StandardCharsets.UTF_8)); + inputDelete.addColumn("f1".getBytes(), "q1".getBytes()); + + Increment inputIncrement = new Increment("row".getBytes(StandardCharsets.UTF_8)); + inputIncrement.addColumn("f1".getBytes(), "q1".getBytes(), 1); + + Append inputAppend = new Append("row".getBytes(StandardCharsets.UTF_8)); + inputAppend.add("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + + RowMutations inputRowMutations = new RowMutations("row".getBytes()); + + List list = + Arrays.asList(inputPut, inputDelete, inputIncrement, inputAppend, inputRowMutations); + + long timestampBefore = System.currentTimeMillis(); + List result = new InPlaceTimestamper().fillTimestamp(list); + long timestampAfter = System.currentTimeMillis(); + + assertRecursiveEquals(list, result); + + assertThat(getCell(inputPut, 0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(inputPut, 0).getTimestamp()).isAtMost(timestampAfter); + + assertThat(getCell(inputDelete, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputIncrement, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + assertThat(getCell(inputAppend, 0).getTimestamp()).isEqualTo(HConstants.LATEST_TIMESTAMP); + } + + @Test + public void testFillingListOfRowMutations() throws IOException { + Put inputPut = new Put("row".getBytes(StandardCharsets.UTF_8)); + inputPut.addColumn("f1".getBytes(), "q1".getBytes(), "v".getBytes()); + + Delete inputDelete = new Delete("row".getBytes(StandardCharsets.UTF_8)); + inputDelete.addColumn("f1".getBytes(), "q1".getBytes()); + + RowMutations inputRowMutations = new RowMutations("row".getBytes()); + inputRowMutations.add(inputPut); + inputRowMutations.add(inputDelete); + + List input = Arrays.asList(inputRowMutations); + + long timestampBefore = System.currentTimeMillis(); + List result = new InPlaceTimestamper().fillTimestamp(input); + long timestampAfter = System.currentTimeMillis(); + + assertThat(result).isEqualTo(input); + 
assertRowMutationsEqual(result.get(0), input.get(0)); + + assertThat(getCell(inputPut, 0).getTimestamp()).isAtLeast(timestampBefore); + assertThat(getCell(inputPut, 0).getTimestamp()).isAtMost(timestampAfter); + } + + private void assertRecursiveEquals(List list, List result) { + assertThat(list).isEqualTo(result); + for (int i = 0; i < list.size(); i++) { + assertRowEquals(list.get(i), result.get(i)); + } + } + + private void assertRowEquals(Row row, Row row1) { + assertThat(row).isEqualTo(row1); + if (row instanceof RowMutations) { + assertRowMutationsEqual((RowMutations) row, (RowMutations) row1); + } + } + + private void assertRowMutationsEqual(RowMutations rm, RowMutations result) { + assertThat(rm).isEqualTo(result); + assertThat(rm.getMutations()).isEqualTo(result.getMutations()); + for (int i = 0; i < rm.getMutations().size(); i++) { + assertThat(rm.getMutations().get(i)).isEqualTo(result.getMutations().get(i)); + } + } + + public static Cell getCell(Mutation m, int id) throws IOException { + CellScanner cs = m.cellScanner(); + assertThat(cs.advance()).isTrue(); + for (int i = 0; i < id; i++) { + assertThat(cs.advance()).isTrue(); + } + return cs.current(); + } + + public static Cell getCell(RowMutations m, int mutationId, int id) throws IOException { + CellScanner cs = m.getMutations().get(mutationId).cellScanner(); + assertThat(cs.advance()).isTrue(); + for (int i = 0; i < id; i++) { + assertThat(cs.advance()).isTrue(); + } + return cs.current(); + } +} diff --git a/bigtable-hbase-mirroring-client-1.x-parent/pom.xml b/bigtable-hbase-mirroring-client-1.x-parent/pom.xml index 15db1e62a5..720b62fe17 100644 --- a/bigtable-hbase-mirroring-client-1.x-parent/pom.xml +++ b/bigtable-hbase-mirroring-client-1.x-parent/pom.xml @@ -47,6 +47,14 @@ limitations under the License.
+ + org.apache.maven.plugins + maven-compiler-plugin + + ${compileSource.1.8} + ${compileSource.1.8} + + org.apache.maven.plugins maven-javadoc-plugin diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/pom.xml b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/pom.xml new file mode 100644 index 0000000000..e34dc7504b --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/pom.xml @@ -0,0 +1,441 @@ + + + + 4.0.0 + + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-2.x-parent + 2.0.0-alpha2-SNAPSHOT + + + bigtable-hbase-mirroring-client-1.x-2.x-integration-tests + jar + ${project.groupId}:${project.artifactId} + + This project contains test cases that ought to work for either bigtable-hbase or hbase proper. + + + + ${hbase2.version} + com.google.cloud.bigtable.hbase2_x.BigtableConnection + 1800 + + + + + HBase12ToBigtableLocalIntegrationTests + + + + ${project.groupId} + bigtable-emulator-maven-plugin + 2.0.0-alpha2-SNAPSHOT + + + + start + stop + + + bigtable.emulator.endpoint + + + + + + + org.apache.maven.plugins + maven-failsafe-plugin + + + integration-tests + + integration-test + verify + + + + **/IntegrationTests.java + + + + hbase-to-bigtable-local-configuration.xml + com.google.cloud.bigtable.hbase.mirroring.utils.compat.TableCreator2x + org.apache.hadoop.hbase.regionserver.FailingHBaseHRegion2 + + + + ${bigtable.emulator.endpoint} + + + true + + + 1 + ${test.timeout} + + + true + + + + ${project.build.directory}/failsafe-reports/integration-tests/failsafe-summary.xml + + ${project.build.directory}/failsafe-reports/integration-tests + + + + + + + + + + BigtableToHBase12LocalIntegrationTests + + + + ${project.groupId} + bigtable-emulator-maven-plugin + 2.0.0-alpha2-SNAPSHOT + + + + start + stop + + + bigtable.emulator.endpoint + + + + + + + org.apache.maven.plugins + 
maven-failsafe-plugin + + + integration-tests + + integration-test + verify + + + + **/IntegrationTests.java + + + + bigtable-to-hbase-local-configuration.xml + com.google.cloud.bigtable.hbase.mirroring.utils.compat.TableCreator2x + org.apache.hadoop.hbase.regionserver.FailingHBaseHRegion2 + + + + ${bigtable.emulator.endpoint} + + + true + + + 1 + ${test.timeout} + + + true + + + + ${project.build.directory}/failsafe-reports/integration-tests/failsafe-summary.xml + + ${project.build.directory}/failsafe-reports/integration-tests + + + + + + + + + + + + + + com.google.cloud + google-cloud-bigtable-bom + ${bigtable.version} + pom + import + + + + com.google.cloud + google-cloud-bigtable-deps-bom + ${bigtable.version} + pom + import + + + + + + + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-2.x + 2.0.0-alpha2-SNAPSHOT + test + + + + ${project.groupId} + bigtable-hbase-2.x + 2.0.0-alpha2-SNAPSHOT + test + + + + org.apache.hbase + hbase-shaded-client + + + + + + + com.google.cloud + google-cloud-bigtable + ${bigtable.version} + test + + + + + org.apache.hbase + hbase-shaded-testing-util + ${hbase2.version} + test + + + + + + com.google.code.findbugs + jsr305 + ${jsr305.version} + test + + + + com.google.guava + guava + 30.1.1-android + test + + + + commons-lang + commons-lang + ${commons-lang.version} + test + + + + io.opencensus + opencensus-impl + 0.28.0 + + + io.opencensus + opencensus-exporter-trace-zipkin + 0.28.0 + + + io.opencensus + opencensus-exporter-stats-prometheus + 0.28.0 + + + io.prometheus + simpleclient_httpserver + 0.3.0 + + + + + junit + junit + ${junit.version} + test + + + + org.junit.platform + junit-platform-launcher + 1.6.2 + test + + + com.google.truth + truth + 1.1.2 + test + + + org.apache.logging.log4j + log4j-api + 2.14.1 + test + + + org.apache.logging.log4j + log4j-core + 2.14.1 + test + + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-1.x-integration-tests + 2.0.0-alpha2-SNAPSHOT + test + test-jar + + + 
com.google.cloud.bigtable + bigtable-hbase-mirroring-client-1.x + 2.0.0-alpha2-SNAPSHOT + test + test-jar + + + + + org.apache.logging.log4j + log4j-slf4j-impl + 2.14.1 + test + + + + + + org.slf4j + slf4j-api + 1.7.30 + test + + + org.slf4j + slf4j-log4j12 + 1.7.30 + test + + + + + io.dropwizard.metrics + metrics-core + 3.2.6 + test + + + + + + + + + org.apache.maven.plugins + maven-deploy-plugin + 3.0.0-M1 + + true + + + + org.sonatype.plugins + nexus-staging-maven-plugin + + true + + + + org.apache.maven.plugins + maven-site-plugin + + true + + + + org.apache.maven.plugins + maven-source-plugin + + true + + + + org.apache.maven.plugins + maven-javadoc-plugin + + true + + + + org.apache.maven.plugins + maven-gpg-plugin + + true + + + + org.codehaus.mojo + clirr-maven-plugin + + true + + + + + + + + org.apache.maven.plugins + maven-surefire-plugin + + + default-test + test + + test + + + false + + **/*.java + + + + + + + + diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/main/java/com/google/cloud/bigtable/hbase/MavenPlaceholderIntegration12x.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/main/java/com/google/cloud/bigtable/hbase/MavenPlaceholderIntegration12x.java new file mode 100644 index 0000000000..42c46a504f --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/main/java/com/google/cloud/bigtable/hbase/MavenPlaceholderIntegration12x.java @@ -0,0 +1,23 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase; + +/* + * This is a placeholder src/main class which is a workaround to run the maven-jar-plugin. + */ +class MavenPlaceholderIntegration12x { + private MavenPlaceholderIntegration12x() {} +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/IntegrationTests.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/IntegrationTests.java new file mode 100644 index 0000000000..d1d9e81661 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/IntegrationTests.java @@ -0,0 +1,93 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.hbase.mirroring; + +import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.PrometheusStatsCollectionRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.ZipkinTracingRule; +import org.junit.Assume; +import org.junit.ClassRule; +import org.junit.runner.RunWith; +import org.junit.runners.Suite; + +/** + * Integration tests for 1.x MirroringClient. + * + *

Tests can be run with any combination of Bigtable and HBase as the primary or secondary + * database. They rely solely on the HBase Connection interface and test the behaviour of + * MirroringClient, not any particular underlying implementation. The only exceptions to that rule + * are tests that exercise the behaviour of MirroringClient in the presence of database errors (more on + * this later) - those tests can only be run in a specific environment and are skipped when they + * cannot be performed. Tests connect to actual database instances - either a locally hosted HBase, an + * in-memory MiniCluster (automatically started by ITs when the {@code use-hbase-mini-cluster} system + * property is set to true), a local Bigtable emulator (remember to set the appropriate {@code + * BIGTABLE_EMULATOR_HOST} environment variable for the java-bigtable-hbase client) or remote HBase or + * Bigtable clusters. + * + *

ITs create the mirroring connection and its components the same way a user of this + * library would - by reading an XML configuration file (possibly modifying or adding some + * values to test specific cases). The {@code resources} directory contains two configuration files - + * one configures the MirroringConnection to use HBase as the primary database and Bigtable as the + * secondary, the other configures it the other way around. The path to the configuration XML file should be + * provided in the {@code integration-tests-config-file-name} system property. + * + *

Cases in which one of the databases fails to perform an operation can only be tested when one of + * the tested databases is an in-memory HBase MiniCluster instance automatically started by + * ITs. We use a custom HBase {@link org.apache.hadoop.hbase.regionserver.Region} implementation + * ({@link + * com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegion}) and + * inject it into the MiniCluster to reject some operations. {@link Assume} guards tests that + * should only run under those circumstances. + * + *

ITs can be run using Maven and one of the test profiles defined in the {@code + * bigtable-hbase-mirroring-client-1.x-integration-tests/pom.xml} file - {@code + * HBaseToBigtableLocalIntegrationTests} and {@code BigtableToHBaseLocalIntegrationTests}. They use an + * in-memory HBase MiniCluster and automatically start a local Bigtable emulator. Run {@code mvn compile + * verify -Penable-integration-tests,HBaseToBigtableLocalIntegrationTests} to execute the + * ITs with Maven. + * + *

These tests are also used to verify the 2.x synchronous {@link + * com.google.cloud.bigtable.mirroring.hbase2_x.MirroringConnection}. Because there are small + * differences between the 1.x and 2.x tools used by ITs, a compatibility layer has been introduced to + * keep the tests consistent. Specific implementations of that layer are selected by setting the + * appropriate system properties ({@code integrations.compat.table-creator-impl}, {@code + * integrations.compat.failingregion.impl}); their correct values can be found in the {@code pom.xml} + * files of the appropriate integration test modules. + * + *

ITs are integrated with Prometheus for metrics and Zipkin for tracing. Setting + * the {@code PROMETHEUS_SERVER_PORT} environment variable will start a Prometheus server (configured by + * resources/prometheus.yml). Setting the {@code ZIPKIN_API_URL} environment variable ({@code + * host:port}) will enable trace reporting to a Zipkin server (see {@link ZipkinTracingRule} for + * details). + */ +@RunWith(Suite.class) +@Suite.SuiteClasses({ + TestErrorDetection.class, + TestBlocking.class, + TestBufferedMutator.class, + TestMirroringTable.class, + TestReadVerificationSampling.class, +}) +public class IntegrationTests { + // Classes in test suites should use their own ConnectionRule, the one here serves to keep a + // single HBase MiniCluster connection up for all tests (if one is needed). + @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); + @ClassRule public static ZipkinTracingRule zipkinTracingRule = new ZipkinTracingRule(); + + @ClassRule + public static PrometheusStatsCollectionRule prometheusStatsCollectionRule = + new PrometheusStatsCollectionRule(); +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator2x.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator2x.java new file mode 100644 index 0000000000..84b59ea97b --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/compat/TableCreator2x.java @@ -0,0 +1,44 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase.mirroring.utils.compat; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.stream.Collectors; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.TableDescriptor; +import org.apache.hadoop.hbase.client.TableDescriptorBuilder; + +public class TableCreator2x implements TableCreator { + @Override + public void createTable(Connection connection, String tableName, byte[]... 
columnFamilies) + throws IOException { + Admin admin = connection.getAdmin(); + + TableDescriptor descriptor = + TableDescriptorBuilder.newBuilder(TableName.valueOf(tableName)) + .setColumnFamilies( + Arrays.stream(columnFamilies) + .map((cf) -> ColumnFamilyDescriptorBuilder.newBuilder(cf).build()) + .collect(Collectors.toCollection(ArrayList::new))) + .build(); + admin.createTable(descriptor); + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/org/apache/hadoop/hbase/regionserver/FailingHBaseHRegion2.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/org/apache/hadoop/hbase/regionserver/FailingHBaseHRegion2.java new file mode 100644 index 0000000000..75ffe6fe33 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/java/org/apache/hadoop/hbase/regionserver/FailingHBaseHRegion2.java @@ -0,0 +1,154 @@ +/* + * Copyright 2015 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.regionserver; + +import static com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegion.batchMutateWithFailures; +import static com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegion.processRowThrow; + +import com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegion; +import java.io.IOException; +import java.util.List; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.CompareOperator; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.client.Append; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Increment; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.RowMutations; +import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.client.TableDescriptor; +import org.apache.hadoop.hbase.filter.ByteArrayComparable; +import org.apache.hadoop.hbase.io.TimeRange; +import org.apache.hadoop.hbase.wal.WAL; + +/** + * {@link FailingHBaseHRegion} but adapted to HBase 2.x implementation of MiniCluster server. + * + *

This class has to reside in org.apache.hadoop.hbase.regionserver package because + * HRegion.RegionScannerImpl returned by {@link FailingHBaseHRegion2#getScanner(Scan, List)} is + * package-private. + */ +public class FailingHBaseHRegion2 extends HRegion { + public FailingHBaseHRegion2( + HRegionFileSystem fs, + WAL wal, + Configuration confParam, + HTableDescriptor htd, + RegionServerServices rsServices) { + super(fs, wal, confParam, htd, rsServices); + } + + public FailingHBaseHRegion2( + Path tableDir, + WAL wal, + FileSystem fs, + Configuration confParam, + HRegionInfo regionInfo, + HTableDescriptor htd, + RegionServerServices rsServices) { + super(tableDir, wal, fs, confParam, regionInfo, htd, rsServices); + } + + public FailingHBaseHRegion2( + Path path, + WAL wal, + FileSystem fs, + Configuration conf, + RegionInfo regionInfo, + TableDescriptor htd, + RegionServerServices rsServices) { + super(path, wal, fs, conf, regionInfo, htd, rsServices); + } + + @Override + public HRegion.RegionScannerImpl getScanner(Scan scan, List additionalScanners) + throws IOException { + // HBase 2.x implements Gets as Scans with start row == end row == requested row. 
+ processRowThrow(scan.getStartRow()); + return super.getScanner(scan, additionalScanners); + } + + @Override + public void mutateRow(RowMutations rm) throws IOException { + processRowThrow(rm.getRow()); + super.mutateRow(rm); + } + + @Override + public OperationStatus[] batchMutate( + Mutation[] mutations, boolean atomic, long nonceGroup, long nonce) throws IOException { + return batchMutateWithFailures( + mutations, (m) -> super.batchMutate(m, atomic, nonceGroup, nonce)); + } + + @Override + public OperationStatus[] batchMutate(Mutation[] mutations, long nonceGroup, long nonce) + throws IOException { + return batchMutateWithFailures(mutations, (m) -> super.batchMutate(m, nonceGroup, nonce)); + } + + @Override + public Result get(Get get) throws IOException { + processRowThrow(get.getRow()); + return super.get(get); + } + + @Override + public Result increment(Increment mutation, long nonceGroup, long nonce) throws IOException { + processRowThrow(mutation.getRow()); + return super.increment(mutation, nonceGroup, nonce); + } + + @Override + public Result append(Append mutation, long nonceGroup, long nonce) throws IOException { + processRowThrow(mutation.getRow()); + return super.append(mutation, nonceGroup, nonce); + } + + @Override + public boolean checkAndMutate( + byte[] row, + byte[] family, + byte[] qualifier, + CompareOperator op, + ByteArrayComparable comparator, + TimeRange timeRange, + Mutation mutation) + throws IOException { + processRowThrow(row); + return super.checkAndMutate(row, family, qualifier, op, comparator, timeRange, mutation); + } + + @Override + public boolean checkAndRowMutate( + byte[] row, + byte[] family, + byte[] qualifier, + CompareOperator op, + ByteArrayComparable comparator, + TimeRange timeRange, + RowMutations rm) + throws IOException { + processRowThrow(row); + return super.checkAndRowMutate(row, family, qualifier, op, comparator, timeRange, rm); + } +} diff --git 
a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml new file mode 100644 index 0000000000..6050e7ee40 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml @@ -0,0 +1,56 @@ + + + hbase.client.connection.impl + com.google.cloud.bigtable.mirroring.hbase2_x.MirroringConnection + + + + google.bigtable.mirroring.primary-client.connection.impl + com.google.cloud.bigtable.hbase2_x.BigtableConnection + + + + google.bigtable.mirroring.secondary-client.connection.impl + default + + + + google.bigtable.project.id + fake-project + + + + google.bigtable.instance.id + fake-instance + + + + google.bigtable.use.gcj.client + false + + + + hbase.client.retries.number + 2 + + + + use-hbase-mini-cluster + true + + + + google.bigtable.mirroring.write-error-log.appender.prefix-path + /tmp/write-error-log + + + + google.bigtable.mirroring.write-error-log.appender.max-buffer-size + 8388608 + + + + google.bigtable.mirroring.write-error-log.appender.drop-on-overflow + false + + diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml new file mode 100644 index 0000000000..b57a6087d9 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml @@ -0,0 +1,56 @@ + + + hbase.client.connection.impl + 
com.google.cloud.bigtable.mirroring.hbase2_x.MirroringConnection + + + + google.bigtable.mirroring.primary-client.connection.impl + default + + + + google.bigtable.mirroring.secondary-client.connection.impl + com.google.cloud.bigtable.hbase2_x.BigtableConnection + + + + google.bigtable.project.id + fake-project + + + + google.bigtable.instance.id + fake-instance + + + + google.bigtable.use.gcj.client + false + + + + hbase.client.retries.number + 2 + + + + use-hbase-mini-cluster + true + + + + google.bigtable.mirroring.write-error-log.appender.prefix-path + /tmp/write-error-log + + + + google.bigtable.mirroring.write-error-log.appender.max-buffer-size + 8388608 + + + + google.bigtable.mirroring.write-error-log.appender.drop-on-overflow + false + + diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/log4j.properties b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/log4j.properties new file mode 100644 index 0000000000..f0dbf50014 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/log4j.properties @@ -0,0 +1,7 @@ +log4j.rootLogger=WARN, console + +log4j.appender.console=org.apache.log4j.ConsoleAppender +log4j.appender.console.Target=System.out +log4j.appender.console.layout=org.apache.log4j.PatternLayout +log4j.appender.console.layout.ConversionPattern=[%-20t] %-5p %-20c{1} - %m%n +log4j.logger.com.google.cloud.bigtable.mirroring=OFF diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/prometheus.yml b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/prometheus.yml new file mode 100644 index 0000000000..63c3fc1a27 --- /dev/null +++ 
b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-1.x-2.x-integration-tests/src/test/resources/prometheus.yml @@ -0,0 +1,13 @@ +global: + scrape_interval: 5s + + external_labels: + monitor: 'bigtable-mirroring-client-integration-tests' + +scrape_configs: + - job_name: 'bigtable-mirroring-client-integration-tests' + + scrape_interval: 5s + + static_configs: + - targets: ['localhost:8888'] diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/pom.xml b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/pom.xml new file mode 100644 index 0000000000..224af1cf82 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/pom.xml @@ -0,0 +1,475 @@ + + + + 4.0.0 + + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-2.x-parent + 2.0.0-alpha2-SNAPSHOT + + + bigtable-hbase-mirroring-client-2.x-integration-tests + jar + ${project.groupId}:${project.artifactId} + + This project contains test cases that ought to work for either bigtable-hbase or hbase proper. 
+ + + + ${hbase2.version} + com.google.cloud.bigtable.hbase2_x.BigtableConnection + org.apache.hadoop.hbase.client.BigtableAsyncConnection + 1800 + + + + + HBase2ToBigtableLocalIntegrationTests + + + + ${project.groupId} + bigtable-emulator-maven-plugin + 2.0.0-alpha2-SNAPSHOT + + + + start + stop + + + bigtable.emulator.endpoint + + + + + + + org.apache.maven.plugins + maven-failsafe-plugin + + + integration-tests + + integration-test + verify + + + + **/IntegrationTests.java + + + + hbase-to-bigtable-local-configuration.xml + com.google.cloud.bigtable.hbase.mirroring.utils.compat.TableCreator2x + org.apache.hadoop.hbase.regionserver.FailingHBaseHRegion2 + + + + ${bigtable.emulator.endpoint} + + + true + + + 1 + ${test.timeout} + + + true + + + + ${project.build.directory}/failsafe-reports/integration-tests/failsafe-summary.xml + + ${project.build.directory}/failsafe-reports/integration-tests + + + + + + + + + + BigtableToHBase2LocalIntegrationTests + + + + ${project.groupId} + bigtable-emulator-maven-plugin + 2.0.0-alpha2-SNAPSHOT + + + + start + stop + + + bigtable.emulator.endpoint + + + + + + + org.apache.maven.plugins + maven-failsafe-plugin + + + integration-tests + + integration-test + verify + + + + **/IntegrationTests.java + + + + bigtable-to-hbase-local-configuration.xml + com.google.cloud.bigtable.hbase.mirroring.utils.compat.TableCreator2x + org.apache.hadoop.hbase.regionserver.FailingHBaseHRegion2 + + + + ${bigtable.emulator.endpoint} + + + true + + + 1 + ${test.timeout} + + + true + + + + ${project.build.directory}/failsafe-reports/integration-tests/failsafe-summary.xml + + ${project.build.directory}/failsafe-reports/integration-tests + + + + + + + + + + + + + + com.google.cloud + google-cloud-bigtable-bom + ${bigtable.version} + pom + import + + + + com.google.cloud + google-cloud-bigtable-deps-bom + ${bigtable.version} + pom + import + + + + + + + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-2.x + 2.0.0-alpha2-SNAPSHOT + test + + + 
+ ${project.groupId} + bigtable-hbase-2.x + 2.0.0-alpha2-SNAPSHOT + test + + + + org.apache.hbase + hbase-shaded-client + + + + + + + com.google.cloud + google-cloud-bigtable + ${bigtable.version} + test + + + + + org.apache.hbase + hbase-shaded-testing-util + ${hbase2.version} + test + + + + + + com.google.code.findbugs + jsr305 + ${jsr305.version} + test + + + + com.google.guava + guava + 30.1.1-android + test + + + + commons-lang + commons-lang + ${commons-lang.version} + test + + + + io.opencensus + opencensus-impl + 0.28.0 + + + io.opencensus + opencensus-exporter-trace-zipkin + 0.28.0 + + + io.opencensus + opencensus-exporter-stats-prometheus + 0.28.0 + + + io.prometheus + simpleclient_httpserver + 0.3.0 + + + + + junit + junit + ${junit.version} + test + + + + org.junit.platform + junit-platform-launcher + 1.6.2 + test + + + com.google.truth + truth + 1.1.2 + test + + + org.apache.logging.log4j + log4j-api + 2.14.1 + test + + + org.apache.logging.log4j + log4j-core + 2.14.1 + test + + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-1.x-integration-tests + 2.0.0-alpha2-SNAPSHOT + test + test-jar + + + org.apache.hbase + * + + + + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-1.x-2.x-integration-tests + 2.0.0-alpha2-SNAPSHOT + test-jar + test + + + com.google.cloud.bigtable + bigtable-hbase-mirroring-client-1.x + 2.0.0-alpha2-SNAPSHOT + test-jar + test + + + org.apache.hbase + * + + + + + org.mockito + mockito-core + 3.8.0 + test + + + + + org.apache.logging.log4j + log4j-slf4j-impl + 2.14.1 + test + + + + + + org.slf4j + slf4j-api + 1.7.30 + test + + + org.slf4j + slf4j-log4j12 + 1.7.30 + test + + + + + io.dropwizard.metrics + metrics-core + 3.2.6 + test + + + + + + + + + org.apache.maven.plugins + maven-deploy-plugin + 3.0.0-M1 + + true + + + + org.sonatype.plugins + nexus-staging-maven-plugin + + true + + + + org.apache.maven.plugins + maven-site-plugin + + true + + + + org.apache.maven.plugins + maven-source-plugin + + true + 
+ + + org.apache.maven.plugins + maven-javadoc-plugin + + true + + + + org.apache.maven.plugins + maven-gpg-plugin + + true + + + + org.codehaus.mojo + clirr-maven-plugin + + true + + + + + + + + org.apache.maven.plugins + maven-surefire-plugin + + + default-test + test + + test + + + false + + **/*.java + + + + + + + org.apache.maven.plugins + maven-compiler-plugin + + 8 + 8 + + + + + diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/main/java/com/google/cloud/bigtable/hbase/MavenPlaceholderIntegration2x.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/main/java/com/google/cloud/bigtable/hbase/MavenPlaceholderIntegration2x.java new file mode 100644 index 0000000000..6ea5aee245 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/main/java/com/google/cloud/bigtable/hbase/MavenPlaceholderIntegration2x.java @@ -0,0 +1,23 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase; + +/* + * This is a placeholder src/main class which is a workaround to run the maven-jar-plugin. 
+ */ +class MavenPlaceholderIntegration2x { + private MavenPlaceholderIntegration2x() {} +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/IntegrationTests.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/IntegrationTests.java new file mode 100644 index 0000000000..de53f37941 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/IntegrationTests.java @@ -0,0 +1,33 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase.mirroring; + +import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; +import org.junit.ClassRule; +import org.junit.runner.RunWith; +import org.junit.runners.Suite; + +@RunWith(Suite.class) +@Suite.SuiteClasses({ + TestMirroringAsyncTable.class, + TestBlocking.class, + TestErrorDetection.class, +}) +public class IntegrationTests { + // Classes in test suites should use their own ConnectionRule; the one here serves to keep a + // single HBase MiniCluster connection up for all tests (if one is needed).
+ @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBlocking.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBlocking.java new file mode 100644 index 0000000000..e901f27b2e --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestBlocking.java @@ -0,0 +1,174 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.hbase.mirroring; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_FLOW_CONTROLLER_STRATEGY_FACTORY_CLASS; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_MISMATCH_DETECTOR_FACTORY_CLASS; +import static com.google.common.truth.Truth.assertThat; +import static org.junit.Assert.fail; + +import com.google.cloud.bigtable.hbase.mirroring.utils.AsyncConnectionRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.BlockingFlowControllerStrategy; +import com.google.cloud.bigtable.hbase.mirroring.utils.BlockingMismatchDetector; +import com.google.cloud.bigtable.hbase.mirroring.utils.ConfigurationHelper; +import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.Helpers; +import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounterRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter; +import com.google.cloud.bigtable.mirroring.hbase2_x.MirroringAsyncConnection; +import com.google.common.util.concurrent.SettableFuture; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.AsyncTable; +import org.apache.hadoop.hbase.client.Get; +import org.junit.Before; +import org.junit.ClassRule; +import org.junit.Rule; +import org.junit.Test; + +public class TestBlocking { + + @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); + + @ClassRule + public static AsyncConnectionRule asyncConnectionRule = new AsyncConnectionRule(connectionRule); + + @Rule + 
public MismatchDetectorCounterRule mismatchDetectorCounterRule = + new MismatchDetectorCounterRule(); + + private static final byte[] columnFamily1 = "cf1".getBytes(); + private static final byte[] qualifier1 = "q1".getBytes(); + + private TableName tableName; + + @Before + public void setUp() throws IOException { + this.tableName = connectionRule.createTable(columnFamily1); + } + + @Test(timeout = 15000) + public void testConnectionCloseBlocksUntilAllRequestsHaveBeenVerified() + throws InterruptedException, TimeoutException, ExecutionException { + Configuration config = ConfigurationHelper.newConfiguration(); + config.set( + MIRRORING_MISMATCH_DETECTOR_FACTORY_CLASS, + BlockingMismatchDetector.Factory.class.getName()); + BlockingMismatchDetector.reset(); + + final MirroringAsyncConnection connection = asyncConnectionRule.createAsyncConnection(config); + AsyncTable t = connection.getTable(tableName); + final List<CompletableFuture<Result>> getFutures = new ArrayList<>(); + for (int i = 0; i < 10; i++) { + Get get = new Get("1".getBytes()); + get.addColumn(columnFamily1, qualifier1); + getFutures.add(t.get(get)); + } + + final SettableFuture<Void> closingThreadStarted = SettableFuture.create(); + final SettableFuture<Void> closingThreadEnded = SettableFuture.create(); + + Thread closingThread = + new Thread( + () -> { + try { + closingThreadStarted.set(null); + CompletableFuture.allOf(getFutures.toArray(new CompletableFuture[0])).get(); + connection.close(); + closingThreadEnded.set(null); + } catch (Exception e) { + throw new RuntimeException(e); + } + }); + closingThread.start(); + + // Wait until closing thread starts. + closingThreadStarted.get(1, TimeUnit.SECONDS); + + // And give it some time to run, to verify that it has blocked. + try { + closingThreadEnded.get(1, TimeUnit.SECONDS); + fail("should throw"); + } catch (TimeoutException ignored) { + // expected + } + + // Finish running verifications. + BlockingMismatchDetector.unblock(); + + // And now Connection#close() should unblock.
+ closingThreadEnded.get(1, TimeUnit.SECONDS); + + // And all verifications should have finished. + assertThat(TestMismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()) + .isEqualTo(10); + } + + @Test(timeout = 10000) + public void flowControllerBlocksScheduling() + throws IOException, InterruptedException, ExecutionException, TimeoutException { + Configuration config = ConfigurationHelper.newConfiguration(); + config.set( + MIRRORING_FLOW_CONTROLLER_STRATEGY_FACTORY_CLASS, + BlockingFlowControllerStrategy.Factory.class.getName()); + BlockingFlowControllerStrategy.reset(); + + final byte[] row = "1".getBytes(); + final SettableFuture<Void> closingThreadStarted = SettableFuture.create(); + final SettableFuture<Void> closingThreadEnded = SettableFuture.create(); + + try (MirroringAsyncConnection connection = asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable table = connection.getTable(tableName); + Thread t = + new Thread( + () -> { + closingThreadStarted.set(null); + try { + table + .put(Helpers.createPut(row, columnFamily1, qualifier1, "1".getBytes())) + .get(); + closingThreadEnded.set(null); + } catch (InterruptedException | ExecutionException e) { + closingThreadEnded.setException(e); + throw new RuntimeException(e); + } + }); + t.start(); + + // Wait until thread starts. + closingThreadStarted.get(1, TimeUnit.SECONDS); + + // Give it some time to run, to verify that it has blocked. + try { + closingThreadEnded.get(1, TimeUnit.SECONDS); + fail("should throw"); + } catch (TimeoutException ignored) { + // expected + } + // Unlock flow controller. + BlockingFlowControllerStrategy.unblock(); + // And verify that it has unblocked.
+ closingThreadEnded.get(1, TimeUnit.SECONDS); + } + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestErrorDetection.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestErrorDetection.java new file mode 100644 index 0000000000..8d4f95d507 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestErrorDetection.java @@ -0,0 +1,307 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.hbase.mirroring; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper.MIRRORING_READ_VERIFICATION_RATE_PERCENT; +import static org.junit.Assert.assertArrayEquals; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; + +import com.google.cloud.bigtable.hbase.mirroring.utils.AsyncConnectionRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.ConfigurationHelper; +import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.DatabaseHelpers; +import com.google.cloud.bigtable.hbase.mirroring.utils.Helpers; +import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounterRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.PropagatingThread; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; +import com.google.cloud.bigtable.mirroring.hbase2_x.MirroringAsyncConnection; +import com.google.common.primitives.Longs; +import java.io.IOException; +import java.nio.charset.Charset; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeoutException; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.CompareOperator; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.AdvancedScanResultConsumer; +import org.apache.hadoop.hbase.client.AsyncConnection; +import org.apache.hadoop.hbase.client.AsyncTable; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.ResultScanner; +import 
org.apache.hadoop.hbase.client.Table; +import org.junit.Assume; +import org.junit.ClassRule; +import org.junit.Ignore; +import org.junit.Rule; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +@RunWith(JUnit4.class) +public class TestErrorDetection { + static final byte[] columnFamily1 = "cf1".getBytes(); + static final byte[] qualifier1 = "q1".getBytes(); + @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); + + @ClassRule + public static AsyncConnectionRule asyncConnectionRule = new AsyncConnectionRule(connectionRule); + + @Rule public ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + + @Rule + public MismatchDetectorCounterRule mismatchDetectorCounterRule = + new MismatchDetectorCounterRule(); + + public DatabaseHelpers databaseHelpers = new DatabaseHelpers(connectionRule, executorServiceRule); + + public static Configuration config = ConfigurationHelper.newConfiguration(); + + static { + config.set(MIRRORING_READ_VERIFICATION_RATE_PERCENT, "100"); + } + + @Test + public void readsAndWritesArePerformed() + throws IOException, ExecutionException, InterruptedException { + final TableName tableName = connectionRule.createTable(columnFamily1); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + asyncConnection + .getTable(tableName) + .put(Helpers.createPut("1".getBytes(), columnFamily1, qualifier1, "1".getBytes())) + .get(); + } + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + Result result = + asyncConnection + .getTable(tableName) + .get(Helpers.createGet("1".getBytes(), columnFamily1, qualifier1)) + .get(); + assertArrayEquals(result.getRow(), "1".getBytes()); + assertArrayEquals(result.getValue(columnFamily1, qualifier1), "1".getBytes()); + assertEquals(TestMismatchDetectorCounter.getInstance().getErrorCount(), 0); + } + } + + @Test + 
public void mismatchIsDetected() throws IOException, InterruptedException, ExecutionException { + final TableName tableName = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + asyncConnection + .getPrimaryConnection() + .getTable(tableName) + .put(Helpers.createPut("1".getBytes(), columnFamily1, qualifier1, "1".getBytes())) + .get(); + } + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + asyncConnection + .getSecondaryConnection() + .getTable(tableName) + .put(Helpers.createPut("1".getBytes(), columnFamily1, qualifier1, "2".getBytes())) + .get(); + } + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + Result result = + asyncConnection + .getTable(tableName) + .get(Helpers.createGet("1".getBytes(), columnFamily1, qualifier1)) + .get(); + // Data from primary is returned. + assertArrayEquals(result.getRow(), "1".getBytes()); + assertArrayEquals(result.getValue(columnFamily1, qualifier1), "1".getBytes()); + } + + assertEquals(1, TestMismatchDetectorCounter.getInstance().getErrorCount()); + } + + @Ignore("Fails for unknown reasons") + @Test + public void concurrentInsertionAndReadingInsertsWithScanner() + throws IOException, InterruptedException, TimeoutException { + + class WorkerThread extends PropagatingThread { + private final long workerId; + private final long batchSize = 100; + private final AsyncConnection connection; + private final TableName tableName; + private final long entriesPerWorker; + private final long numberOfBatches; + + public WorkerThread( + int workerId, AsyncConnection connection, TableName tableName, long numberOfBatches) { + this.workerId = workerId; + this.connection = connection; + this.entriesPerWorker = numberOfBatches * batchSize; + this.numberOfBatches = numberOfBatches; + this.tableName = tableName; + } + + @Override + 
public void performTask() throws Throwable { + AsyncTable table = this.connection.getTable(tableName); + for (long batchId = 0; batchId < this.numberOfBatches; batchId++) { + List<Put> puts = new ArrayList<>(); + for (long batchEntryId = 0; batchEntryId < this.batchSize; batchEntryId++) { + long putIndex = + this.workerId * this.entriesPerWorker + batchId * this.batchSize + batchEntryId; + long putTimestamp = putIndex + 1; + byte[] putIndexBytes = Longs.toByteArray(putIndex); + byte[] putValueBytes = ("value-" + putIndex).getBytes(); + puts.add( + Helpers.createPut( + putIndexBytes, columnFamily1, qualifier1, putTimestamp, putValueBytes)); + } + CompletableFuture.allOf(table.put(puts).toArray(new CompletableFuture[0])).get(); + } + } + } + + final int numberOfWorkers = 100; + final int numberOfBatches = 100; + + final TableName tableName = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection connection = asyncConnectionRule.createAsyncConnection(config)) { + List<PropagatingThread> workers = new ArrayList<>(); + for (int i = 0; i < numberOfWorkers; i++) { + PropagatingThread worker = new WorkerThread(i, connection, tableName, numberOfBatches); + worker.start(); + workers.add(worker); + } + + for (PropagatingThread worker : workers) { + worker.propagatingJoin(60000); + } + } + + try (MirroringAsyncConnection connection = asyncConnectionRule.createAsyncConnection(config)) { + try (ResultScanner s = connection.getTable(tableName).getScanner(columnFamily1, qualifier1)) { + long counter = 0; + for (Result r : s) { + long row = Longs.fromByteArray(r.getRow()); + byte[] value = r.getValue(columnFamily1, qualifier1); + assertEquals(counter, row); + assertArrayEquals(("value-" + counter).getBytes(), value); + counter += 1; + } + } + } + + assertEquals(0, TestMismatchDetectorCounter.getInstance().getErrorCount()); + } + + @Test + public void conditionalMutationsPreserveConsistency() throws IOException, TimeoutException { + // TODO(mwalkiewicz): fix BigtableToHBase2
Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + final int numberOfOperations = 50; + final int numberOfWorkers = 100; + + final byte[] canary = "canary-value".getBytes(); + + class WorkerThread extends PropagatingThread { + private final long workerId; + private final AsyncConnection connection; + private final TableName tableName; + + public WorkerThread(int workerId, AsyncConnection connection, TableName tableName) { + this.workerId = workerId; + this.connection = connection; + this.tableName = tableName; + } + + @Override + public void performTask() throws Throwable { + AsyncTable<?> table = this.connection.getTable(tableName); + byte[] row = String.format("r%s", workerId).getBytes(); + table.put(Helpers.createPut(row, columnFamily1, qualifier1, 0, "0".getBytes())).get(); + for (int i = 0; i < numberOfOperations; i++) { + byte[] currentValue = String.valueOf(i).getBytes(); + byte[] nextValue = String.valueOf(i + 1).getBytes(); + assertFalse( + table + .checkAndMutate(row, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.NOT_EQUAL, currentValue) + .thenPut(Helpers.createPut(row, columnFamily1, qualifier1, i, canary)) + .get()); + assertTrue( + table + .checkAndMutate(row, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.EQUAL, currentValue) + .thenPut(Helpers.createPut(row, columnFamily1, qualifier1, i, nextValue)) + .get()); + } + } + } + + final TableName tableName = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection connection = asyncConnectionRule.createAsyncConnection(config)) { + List<PropagatingThread> workers = new ArrayList<>(); + for (int i = 0; i < numberOfWorkers; i++) { + PropagatingThread worker = new WorkerThread(i, connection, tableName); + worker.start(); + workers.add(worker); + } + + for (PropagatingThread worker : workers) { + worker.propagatingJoin(30000); + } + } + + try (MirroringConnection connection = databaseHelpers.createConnection()) { 
+ try (Table t = connection.getTable(tableName)) { + try (ResultScanner s = t.getScanner(columnFamily1, qualifier1)) { + int counter = 0; + for (Result r : s) { + assertEquals( + new String(r.getRow(), Charset.defaultCharset()), + String.valueOf(numberOfOperations), + new String(r.getValue(columnFamily1, qualifier1), Charset.defaultCharset())); + counter++; + } + assertEquals(numberOfWorkers, counter); + } + } + } + + assertEquals( + numberOfWorkers + 1, // because null returned from the scanner is also verified. + TestMismatchDetectorCounter.getInstance().getVerificationsStartedCounter()); + assertEquals( + numberOfWorkers + 1, + TestMismatchDetectorCounter.getInstance().getVerificationsFinishedCounter()); + assertEquals(0, TestMismatchDetectorCounter.getInstance().getErrorCount()); + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestMirroringAsyncTable.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestMirroringAsyncTable.java new file mode 100644 index 0000000000..c430e855c8 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/TestMirroringAsyncTable.java @@ -0,0 +1,1421 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase.mirroring; + +import static com.google.common.truth.Truth.assertThat; + +import com.google.cloud.bigtable.hbase.mirroring.utils.AsyncConnectionRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.ConfigurationHelper; +import com.google.cloud.bigtable.hbase.mirroring.utils.ConnectionRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.DatabaseHelpers; +import com.google.cloud.bigtable.hbase.mirroring.utils.Helpers; +import com.google.cloud.bigtable.hbase.mirroring.utils.MismatchDetectorCounterRule; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestMismatchDetectorCounter; +import com.google.cloud.bigtable.hbase.mirroring.utils.TestWriteErrorConsumer; +import com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegion; +import com.google.cloud.bigtable.hbase.mirroring.utils.failinghbaseminicluster.FailingHBaseHRegionRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; +import com.google.cloud.bigtable.mirroring.hbase2_x.MirroringAsyncConnection; +import com.google.common.base.Predicate; +import com.google.common.collect.Iterators; +import com.google.common.primitives.Longs; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.stream.Collectors; +import java.util.stream.IntStream; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.CompareOperator; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.AdvancedScanResultConsumer; +import org.apache.hadoop.hbase.client.AsyncTable; +import org.apache.hadoop.hbase.client.Delete; +import 
org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.ResultScanner; +import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.client.ScanResultConsumer; +import org.junit.Assume; +import org.junit.ClassRule; +import org.junit.Ignore; +import org.junit.Rule; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; + +@RunWith(JUnit4.class) +public class TestMirroringAsyncTable { + @ClassRule public static ConnectionRule connectionRule = new ConnectionRule(); + + @ClassRule + public static AsyncConnectionRule asyncConnectionRule = new AsyncConnectionRule(connectionRule); + + @Rule public ExecutorServiceRule executorServiceRule = ExecutorServiceRule.cachedPoolExecutor(); + + @Rule public FailingHBaseHRegionRule failingHBaseHRegionRule = new FailingHBaseHRegionRule(); + + @Rule + public MismatchDetectorCounterRule mismatchDetectorCounterRule = + new MismatchDetectorCounterRule(); + + final Predicate<byte[]> failPredicate = + (bytes) -> bytes.length == 8 && Longs.fromByteArray(bytes) % 2 == 0; + + public DatabaseHelpers databaseHelpers = new DatabaseHelpers(connectionRule, executorServiceRule); + + public static final Configuration config = ConfigurationHelper.newConfiguration(); + + static final byte[] columnFamily1 = "cf1".getBytes(); + static final byte[] qualifier1 = "cq1".getBytes(); + static final byte[] qualifier2 = "cq2".getBytes(); + static final byte[] qualifier3 = "cq3".getBytes(); + static final byte[] qualifier4 = "cq4".getBytes(); + static final byte[] qualifier5 = "cq5".getBytes(); + + public static byte[] rowKeyFromId(int id) { + return Longs.toByteArray(id); + } + + @Test + public void testPut() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection asyncConnection = + 
asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + List<CompletableFuture<Void>> putFutures = + IntStream.range(0, databaseEntriesCount) + .mapToObj(i -> t.put(Helpers.createPut(i, columnFamily1, qualifier1))) + .collect(Collectors.toList()); + + CompletableFuture.allOf(putFutures.toArray(new CompletableFuture[0])).get(); + } + databaseHelpers.verifyTableConsistency(tableName1); + + final TableName tableName2 = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName2); + + List<List<CompletableFuture<Void>>> putBatches = new ArrayList<>(); + int id = 0; + for (int i = 0; i < 10; i++) { + List<Put> puts = new ArrayList<>(); + for (int j = 0; j < 100; j++) { + puts.add(Helpers.createPut(id, columnFamily1, qualifier1)); + id++; + } + putBatches.add(t.put(puts)); + } + CompletableFuture.allOf( + putBatches.stream() + .flatMap(List::stream) + .collect(Collectors.toList()) + .toArray(new CompletableFuture[0])) + .get(); + } + databaseHelpers.verifyTableConsistency(tableName2); + } + + @Test + public void testPutWithPrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + List<CompletableFuture<Void>> putFutures = + IntStream.range(0, databaseEntriesCount) + .mapToObj(i -> t.put(Helpers.createPut(i, columnFamily1, qualifier1))) + .collect(Collectors.toList()); + CompletableFuture.allOf(putFutures.toArray(new CompletableFuture[0])) 
+ .exceptionally(e -> null) + .get(); + + for (int i = 0; i < putFutures.size(); i++) { + checkIfShouldHaveThrown(putFutures.get(i), rowKeyFromId(i)); + } + } + databaseHelpers.verifyTableConsistency(tableName1); + + final TableName tableName2 = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName2); + + List<List<CompletableFuture<Void>>> putBatches = new ArrayList<>(); + int id = 0; + for (int i = 0; i < 100; i++) { + List<Put> puts = new ArrayList<>(); + for (int j = 0; j < 100; j++) { + puts.add(Helpers.createPut(id, columnFamily1, qualifier1)); + id++; + } + putBatches.add(t.put(puts)); + } + List<CompletableFuture<Void>> flatFutures = + putBatches.stream().flatMap(List::stream).collect(Collectors.toList()); + CompletableFuture.allOf(flatFutures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < flatFutures.size(); i++) { + checkIfShouldHaveThrown(flatFutures.get(i), rowKeyFromId(i)); + } + } + databaseHelpers.verifyTableConsistency(tableName2); + } + + @Test + public void testPutWithSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext1 = + new TestMirroringTable.ReportedErrorsContext(); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + List<CompletableFuture<Void>> putFutures = + IntStream.range(0, databaseEntriesCount) + .mapToObj(i -> t.put(Helpers.createPut(i, columnFamily1, qualifier1))) + 
.collect(Collectors.toList()); + + CompletableFuture.allOf(putFutures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + } + databaseHelpers.verifyTableConsistency(tableName1, failPredicate); + + reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext2 = + new TestMirroringTable.ReportedErrorsContext(); + final TableName tableName2 = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName2); + + List<List<CompletableFuture<Void>>> putBatches = new ArrayList<>(); + int id = 0; + for (int i = 0; i < 10; i++) { + List<Put> puts = new ArrayList<>(); + for (int j = 0; j < 100; j++) { + puts.add(Helpers.createPut(id, columnFamily1, qualifier1)); + id++; + } + putBatches.add(t.put(puts)); + } + CompletableFuture.allOf( + putBatches.stream() + .flatMap(List::stream) + .collect(Collectors.toList()) + .toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + } + + databaseHelpers.verifyTableConsistency(tableName2, failPredicate); + reportedErrorsContext2.assertNewErrorsReported(databaseEntriesCount / 2); + } + + @Test + public void testDelete() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + List<CompletableFuture<Void>> deleteFutures = + IntStream.range(0, databaseEntriesCount) + .mapToObj( + i -> t.delete(Helpers.createDelete(rowKeyFromId(i), columnFamily1, qualifier1))) + .collect(Collectors.toList()); + + CompletableFuture.allOf(deleteFutures.toArray(new CompletableFuture[0])).get(); + } + 
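The primary-error tests above share one more idiom worth spelling out: the aggregate failure of `CompletableFuture.allOf` is swallowed with `exceptionally(e -> null)` so that `get()` only waits for everything to settle, after which each future is inspected individually to see exactly which operations failed. A standalone sketch of that idiom, in plain Java (`InspectFailures` and `makeOp` are hypothetical names; `makeOp` stands in for an async mutation that fails on even ids, mirroring the tests' `failPredicate`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class InspectFailures {
  // Hypothetical async operation: fails for even ids, succeeds otherwise.
  static CompletableFuture<Void> makeOp(long id) {
    if (id % 2 == 0) {
      CompletableFuture<Void> f = new CompletableFuture<>();
      f.completeExceptionally(new RuntimeException("failed: " + id));
      return f;
    }
    return CompletableFuture.completedFuture(null);
  }

  public static long countFailed(int n) throws Exception {
    List<CompletableFuture<Void>> futures = new ArrayList<>();
    for (long i = 0; i < n; i++) {
      futures.add(makeOp(i));
    }
    // allOf completes exceptionally if ANY member failed; map that failure away
    // so get() merely waits for all members to settle.
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
        .exceptionally(e -> null)
        .get();
    // Every future is now settled; inspect them one by one.
    return futures.stream().filter(CompletableFuture::isCompletedExceptionally).count();
  }

  public static void main(String[] args) throws Exception {
    System.out.println(countFailed(10)); // even ids 0,2,4,6,8 fail -> 5
  }
}
```

Without the `exceptionally` step, a single failed mutation would make `get()` throw before the loop could check the per-operation futures (the tests' `checkIfShouldHaveThrown` role).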
databaseHelpers.verifyTableConsistency(tableName1); + + final TableName tableName2 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName2, databaseEntriesCount, columnFamily1, qualifier1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName2); + + List<List<CompletableFuture<Void>>> deleteBatches = new ArrayList<>(); + int id = 0; + for (int i = 0; i < 10; i++) { + List<Delete> deletes = new ArrayList<>(); + for (int j = 0; j < 100; j++) { + deletes.add(Helpers.createDelete(rowKeyFromId(id), columnFamily1, qualifier1)); + id++; + } + deleteBatches.add(t.delete(deletes)); + } + CompletableFuture.allOf( + deleteBatches.stream() + .flatMap(List::stream) + .collect(Collectors.toList()) + .toArray(new CompletableFuture[0])) + .get(); + } + databaseHelpers.verifyTableConsistency(tableName2); + } + + @Test + public void testDeleteWithPrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + // Fill tables before forcing operations to fail. 
+ final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + final TableName tableName2 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName2, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + List<CompletableFuture<Void>> deleteFutures = + IntStream.range(0, databaseEntriesCount) + .mapToObj( + i -> t.delete(Helpers.createDelete(rowKeyFromId(i), columnFamily1, qualifier1))) + .collect(Collectors.toList()); + + CompletableFuture.allOf(deleteFutures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < deleteFutures.size(); i++) { + checkIfShouldHaveThrown(deleteFutures.get(i), rowKeyFromId(i)); + } + } + databaseHelpers.verifyTableConsistency(tableName1); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName2); + + List<List<CompletableFuture<Void>>> deleteBatches = new ArrayList<>(); + int id = 0; + for (int i = 0; i < databaseEntriesCount / 100; i++) { + List<Delete> deletes = new ArrayList<>(); + for (int j = 0; j < 100; j++) { + deletes.add(Helpers.createDelete(rowKeyFromId(id), columnFamily1, qualifier1)); + id++; + } + deleteBatches.add(t.delete(deletes)); + } + List<CompletableFuture<Void>> flatFutures = + deleteBatches.stream().flatMap(List::stream).collect(Collectors.toList()); + CompletableFuture.allOf(flatFutures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < flatFutures.size(); i++) { + checkIfShouldHaveThrown(flatFutures.get(i), rowKeyFromId(i)); + } + + 
assertThat(flatFutures.stream().filter(CompletableFuture::isCompletedExceptionally).count()) + .isEqualTo(flatFutures.stream().filter(f -> !f.isCompletedExceptionally()).count()); + } + databaseHelpers.verifyTableConsistency(tableName2); + } + + @Test + public void testDeleteWithSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + // Fill tables before forcing operations to fail. + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + final TableName tableName2 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName2, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.BAD_FAMILY, "failed"); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext1 = + new TestMirroringTable.ReportedErrorsContext(); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + List<CompletableFuture<Void>> deleteFutures = + IntStream.range(0, databaseEntriesCount) + .mapToObj( + i -> t.delete(Helpers.createDelete(rowKeyFromId(i), columnFamily1, qualifier1))) + .collect(Collectors.toList()); + + CompletableFuture.allOf(deleteFutures.toArray(new CompletableFuture[0])).get(); + } + assertThat(databaseHelpers.countRows(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(0); + assertThat(databaseHelpers.countRows(tableName1, DatabaseHelpers.DatabaseSelector.SECONDARY)) + .isEqualTo(databaseEntriesCount / 2); + reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext2 = + new 
TestMirroringTable.ReportedErrorsContext(); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName2); + + List<List<CompletableFuture<Void>>> deleteBatches = new ArrayList<>(); + int id = 0; + for (int i = 0; i < 10; i++) { + List<Delete> deletes = new ArrayList<>(); + for (int j = 0; j < 100; j++) { + deletes.add(Helpers.createDelete(rowKeyFromId(id), columnFamily1, qualifier1)); + id++; + } + deleteBatches.add(t.delete(deletes)); + } + CompletableFuture.allOf( + deleteBatches.stream() + .flatMap(List::stream) + .collect(Collectors.toList()) + .toArray(new CompletableFuture[0])) + .get(); + } + assertThat(databaseHelpers.countRows(tableName2, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(0); + assertThat(databaseHelpers.countRows(tableName2, DatabaseHelpers.DatabaseSelector.SECONDARY)) + .isEqualTo(databaseEntriesCount / 2); + reportedErrorsContext2.assertNewErrorsReported(databaseEntriesCount / 2); + } + + @Test + public void testCheckAndPut() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + List<CompletableFuture<Boolean>> futures = new ArrayList<>(); + + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifEquals(Longs.toByteArray(i)) + .thenPut(Helpers.createPut(i, columnFamily1, qualifier2))); + futures.add( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.EQUAL, Longs.toByteArray(i)) + .thenPut(Helpers.createPut(i, columnFamily1, qualifier3))); + futures.add( + 
t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.GREATER, Longs.toByteArray(i + 1)) + .thenPut(Helpers.createPut(i, columnFamily1, qualifier4))); + futures.add( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.NOT_EQUAL, Longs.toByteArray(i)) + .thenPut(Helpers.createPut(i, columnFamily1, qualifier5))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + } + assertThat(databaseHelpers.countRows(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(databaseEntriesCount); + assertThat(databaseHelpers.countCells(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(databaseEntriesCount * 4); + databaseHelpers.verifyTableConsistency(tableName1); + } + + @Test + public void testCheckAndPutPrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + List<CompletableFuture<Boolean>> futures = new ArrayList<>(); + + for (int i = 0; i < databaseEntriesCount; i++) { + final byte[] rowKeyAndValue = rowKeyFromId(i); + futures.add( + t.checkAndMutate(rowKeyAndValue, columnFamily1) + .qualifier(qualifier1) + .ifEquals(rowKeyAndValue) + .thenPut(Helpers.createPut(i, columnFamily1, qualifier2))); + } + + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < futures.size(); i++) { + checkIfShouldHaveThrown(futures.get(i), 
rowKeyFromId(i)); + } + } + assertThat(databaseHelpers.countRows(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(databaseEntriesCount); + assertThat(databaseHelpers.countCells(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo((int) (databaseEntriesCount * 1.5)); + databaseHelpers.verifyTableConsistency(tableName1); + } + + @Test + public void testCheckAndPutSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext1 = + new TestMirroringTable.ReportedErrorsContext(); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + List<CompletableFuture<Boolean>> futures = new ArrayList<>(); + + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKeyAndValue = rowKeyFromId(i); + futures.add( + t.checkAndMutate(rowKeyAndValue, columnFamily1) + .qualifier(qualifier1) + .ifEquals(rowKeyAndValue) + .thenPut(Helpers.createPut(i, columnFamily1, qualifier2))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + + for (CompletableFuture<Boolean> fut : futures) { + assertThat(fut.getNow(false)).isEqualTo(true); + } + } + + assertThat( + databaseHelpers.countRows( + tableName1, + DatabaseHelpers.DatabaseSelector.PRIMARY, + Helpers.createScan(columnFamily1, qualifier2))) + .isEqualTo(databaseEntriesCount); + assertThat( + databaseHelpers.countRows( + tableName1, + DatabaseHelpers.DatabaseSelector.SECONDARY, + 
Helpers.createScan(columnFamily1, qualifier2))) + .isEqualTo(databaseEntriesCount / 2); + assertThat(databaseHelpers.countCells(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(databaseEntriesCount * 2); + reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); + } + + @Test + public void testCheckAndDelete() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable( + tableName1, + databaseEntriesCount, + columnFamily1, + qualifier1, + qualifier2, + qualifier3, + qualifier4, + qualifier5); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + assertThat( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifEquals(Longs.toByteArray(i)) + .thenDelete( + Helpers.createDelete(Longs.toByteArray(i), columnFamily1, qualifier2)) + .get()) + .isTrue(); + assertThat( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.EQUAL, Longs.toByteArray(i)) + .thenDelete( + Helpers.createDelete(Longs.toByteArray(i), columnFamily1, qualifier3)) + .get()) + .isTrue(); + assertThat( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.GREATER, Longs.toByteArray(i + 1)) + .thenDelete( + Helpers.createDelete(Longs.toByteArray(i), columnFamily1, qualifier4)) + .get()) + .isTrue(); + assertThat( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.NOT_EQUAL, Longs.toByteArray(i)) + .thenDelete( + Helpers.createDelete(Longs.toByteArray(i), columnFamily1, qualifier5)) + .get()) + .isFalse(); + } + } + + assertThat(databaseHelpers.countRows(tableName1, 
DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(databaseEntriesCount); + + assertThat(databaseHelpers.countCells(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(databaseEntriesCount * 2); + + databaseHelpers.verifyTableConsistency(tableName1); + } + + @Test + public void testCheckAndDeletePrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable( + tableName1, databaseEntriesCount, columnFamily1, qualifier1, qualifier2); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + List<CompletableFuture<Boolean>> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + final byte[] rowKeyAndValue = rowKeyFromId(i); + futures.add( + t.checkAndMutate(rowKeyAndValue, columnFamily1) + .qualifier(qualifier1) + .ifEquals(rowKeyAndValue) + .thenDelete(Helpers.createDelete(rowKeyAndValue, columnFamily1, qualifier2))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < futures.size(); i++) { + checkIfShouldHaveThrown(futures.get(i), rowKeyFromId(i)); + } + } + + assertThat(databaseHelpers.countRows(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(databaseEntriesCount); + + assertThat(databaseHelpers.countCells(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo((int) (databaseEntriesCount * 1.5)); + + databaseHelpers.verifyTableConsistency(tableName1); + } + + @Test + public void testCheckAndDeleteSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( 
+ ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext1 = + new TestMirroringTable.ReportedErrorsContext(); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + List<CompletableFuture<Boolean>> futures = new ArrayList<>(); + + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKeyAndValue = rowKeyFromId(i); + futures.add( + t.checkAndMutate(rowKeyAndValue, columnFamily1) + .qualifier(qualifier1) + .ifEquals(rowKeyAndValue) + .thenDelete(Helpers.createDelete(rowKeyAndValue, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + for (CompletableFuture<Boolean> fut : futures) { + assertThat(fut.getNow(false)).isEqualTo(true); + } + } + assertThat( + databaseHelpers.countRows( + tableName1, + DatabaseHelpers.DatabaseSelector.PRIMARY, + Helpers.createScan(columnFamily1, qualifier1))) + .isEqualTo(0); + assertThat( + databaseHelpers.countRows( + tableName1, + DatabaseHelpers.DatabaseSelector.SECONDARY, + Helpers.createScan(columnFamily1, qualifier1))) + .isEqualTo(databaseEntriesCount / 2); + reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); + } + + @Test + public void testCheckAndMutate() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + try (MirroringAsyncConnection 
asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + assertThat( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.EQUAL, Longs.toByteArray(i)) + .thenMutate( + Helpers.createRowMutations( + rowKey, + Helpers.createPut(i, columnFamily1, qualifier2), + Helpers.createDelete(rowKey, columnFamily1, qualifier1))) + .get()) + .isTrue(); + assertThat( + t.checkAndMutate(rowKey, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.EQUAL, Longs.toByteArray(i)) + .thenMutate( + Helpers.createRowMutations( + rowKey, Helpers.createDelete(rowKey, columnFamily1, qualifier2))) + .get()) + .isFalse(); + } + } + + assertThat( + databaseHelpers.countRows( + tableName1, + DatabaseHelpers.DatabaseSelector.PRIMARY, + Helpers.createScan(columnFamily1, qualifier1))) + .isEqualTo(0); + + assertThat( + databaseHelpers.countRows( + tableName1, + DatabaseHelpers.DatabaseSelector.PRIMARY, + Helpers.createScan(columnFamily1, qualifier2))) + .isEqualTo(databaseEntriesCount); + + databaseHelpers.verifyTableConsistency(tableName1); + } + + @Test + public void testCheckAndMutatePrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable( + tableName1, databaseEntriesCount, columnFamily1, qualifier1, qualifier2); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable<?> t = asyncConnection.getTable(tableName1); + List<CompletableFuture<Boolean>> futures = new ArrayList<>(); + for (int i = 0; i < 
databaseEntriesCount; i++) { + final byte[] rowKeyAndValue = rowKeyFromId(i); + futures.add( + t.checkAndMutate(rowKeyAndValue, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.EQUAL, rowKeyAndValue) + .thenMutate( + Helpers.createRowMutations( + rowKeyAndValue, + Helpers.createDelete(rowKeyAndValue, columnFamily1, qualifier2)))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < futures.size(); i++) { + checkIfShouldHaveThrown(futures.get(i), rowKeyFromId(i)); + } + } + + assertThat(databaseHelpers.countRows(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo(databaseEntriesCount); + + assertThat(databaseHelpers.countCells(tableName1, DatabaseHelpers.DatabaseSelector.PRIMARY)) + .isEqualTo((int) (databaseEntriesCount * 1.5)); + + databaseHelpers.verifyTableConsistency(tableName1); + } + + // TODO(mwalkiewicz): fix + @Ignore("Fails for unknown reasons") + @Test + public void testCheckAndMutateSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext1 = + new TestMirroringTable.ReportedErrorsContext(); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKeyAndValue = rowKeyFromId(i); + futures.add( + 
t.checkAndMutate(rowKeyAndValue, columnFamily1) + .qualifier(qualifier1) + .ifMatches(CompareOperator.EQUAL, rowKeyAndValue) + .thenMutate( + Helpers.createRowMutations( + rowKeyAndValue, + Helpers.createDelete(rowKeyAndValue, columnFamily1, qualifier1)))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + + for (CompletableFuture fut : futures) { + assertThat(fut.getNow(false)).isEqualTo(true); + } + } + assertThat( + databaseHelpers.countRows( + tableName1, + DatabaseHelpers.DatabaseSelector.PRIMARY, + Helpers.createScan(columnFamily1, qualifier1))) + .isEqualTo(0); + assertThat( + databaseHelpers.countRows( + tableName1, + DatabaseHelpers.DatabaseSelector.SECONDARY, + Helpers.createScan(columnFamily1, qualifier1))) + .isEqualTo(databaseEntriesCount / 2); + reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); + } + + @Test + public void testIncrement() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.increment(Helpers.createIncrement(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + } + + databaseHelpers.verifyTableConsistency(tableName1); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + Result r = t.get(Helpers.createGet(rowKey, 
columnFamily1, qualifier1)).get(); + assertThat(Longs.fromByteArray(r.getValue(columnFamily1, qualifier1))).isEqualTo(i + 1); + } + } + } + + @Test + public void testIncrementPrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.increment(Helpers.createIncrement(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < futures.size(); i++) { + checkIfShouldHaveThrown(futures.get(i), rowKeyFromId(i)); + } + } + + databaseHelpers.verifyTableConsistency(tableName1); + } + + @Test + public void testIncrementSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + TestWriteErrorConsumer.clearErrors(); + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext1 = + new 
TestMirroringTable.ReportedErrorsContext(); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.increment(Helpers.createIncrement(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + } + + assertThat(TestWriteErrorConsumer.getErrorCount()).isEqualTo(databaseEntriesCount / 2); + reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); + } + + @Test + public void testAppend() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add( + t.append(Helpers.createAppend(rowKey, columnFamily1, qualifier1, new byte[] {1}))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + } + + databaseHelpers.verifyTableConsistency(tableName1); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + Result r = t.get(Helpers.createGet(rowKey, columnFamily1, qualifier1)).get(); + byte[] expectedValue = new byte[] {0, 0, 0, 0, 0, 0, 0, 0, 1}; + System.arraycopy(rowKey, 0, expectedValue, 0, 8); + assertThat(r.getValue(columnFamily1, 
qualifier1)).isEqualTo(expectedValue); + } + } + } + + @Test + public void testAppendPrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add( + t.append(Helpers.createAppend(rowKey, columnFamily1, qualifier1, new byte[] {1}))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + for (int i = 0; i < futures.size(); i++) { + checkIfShouldHaveThrown(futures.get(i), rowKeyFromId(i)); + } + } + + databaseHelpers.verifyTableConsistency(tableName1); + } + + @Test + public void testAppendSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + TestWriteErrorConsumer.clearErrors(); + + FailingHBaseHRegion.failMutation( + failPredicate, HConstants.OperationStatusCode.SANITY_CHECK_FAILURE, "failed"); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext1 = + new TestMirroringTable.ReportedErrorsContext(); + try (MirroringAsyncConnection asyncConnection = + 
asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add( + t.append(Helpers.createAppend(rowKey, columnFamily1, qualifier1, new byte[] {1}))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + } + + assertThat(TestWriteErrorConsumer.getErrorCount()).isEqualTo(databaseEntriesCount / 2); + reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); + } + + @Test + public void testGet() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.get(Helpers.createGet(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + } + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); + } + + @Test + public void testGetWithPrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + try (MirroringAsyncConnection asyncConnection = + 
asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.get(Helpers.createGet(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < futures.size(); i++) { + checkIfShouldHaveThrown(futures.get(i), rowKeyFromId(i)); + } + } + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); + } + + @Test + public void testGetWithSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.get(Helpers.createGet(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + } + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()) + .isEqualTo(databaseEntriesCount / 2); + assertThat(TestMismatchDetectorCounter.getInstance().getFailureCount()) + .isEqualTo(databaseEntriesCount / 2); + assertThat(TestMismatchDetectorCounter.getInstance().getMismatchCount()).isEqualTo(0); + } + + @Test + public void testExists() throws IOException, ExecutionException, InterruptedException { + int 
databaseEntriesCount = 1000; + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.exists(Helpers.createGet(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + assertThat(futures.stream().allMatch(fut -> fut.getNow(false))).isTrue(); + } + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); + } + + @Test + public void testExistsWithPrimaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.exists(Helpers.createGet(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < futures.size(); i++) { + checkIfShouldHaveThrown(futures.get(i), rowKeyFromId(i)); + } + } + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()).isEqualTo(0); + } + + @Test + public 
void testExistsWithSecondaryErrors() + throws IOException, ExecutionException, InterruptedException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + final TableName tableName1 = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName1, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName1); + List> futures = new ArrayList<>(); + for (int i = 0; i < databaseEntriesCount; i++) { + byte[] rowKey = rowKeyFromId(i); + futures.add(t.exists(Helpers.createGet(rowKey, columnFamily1, qualifier1))); + } + CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get(); + assertThat(futures.stream().allMatch(fut -> fut.getNow(false))).isTrue(); + } + assertThat(TestMismatchDetectorCounter.getInstance().getErrorCount()) + .isEqualTo(databaseEntriesCount / 2); + assertThat(TestMismatchDetectorCounter.getInstance().getFailureCount()) + .isEqualTo(databaseEntriesCount / 2); + assertThat(TestMismatchDetectorCounter.getInstance().getMismatchCount()).isEqualTo(0); + } + + @Test + public void testBatch() throws IOException, InterruptedException, ExecutionException { + int databaseEntriesCount = 1000; + + final TableName tableName = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName); + + List>> batches = new ArrayList<>(); + int id = 0; + while (id < databaseEntriesCount) { + List batch = new ArrayList<>(); + for (int j = 0; j < 100 && id < databaseEntriesCount; j++) { + batch.add(Helpers.createPut(id, columnFamily1, qualifier1)); + id++; + } + batches.add(t.batch(batch)); + } + List> 
flatResults = + batches.stream().flatMap(List::stream).collect(Collectors.toList()); + CompletableFuture.allOf(flatResults.toArray(new CompletableFuture[0])).get(); + } + databaseHelpers.verifyTableConsistency(tableName); + } + + @Test + public void testBatchWithPrimaryErrors() + throws IOException, InterruptedException, ExecutionException { + Assume.assumeTrue( + ConfigurationHelper.isPrimaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + final TableName tableName = connectionRule.createTable(columnFamily1); + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName); + + List>> batches = new ArrayList<>(); + int id = 0; + while (id < databaseEntriesCount) { + List batch = new ArrayList<>(); + for (int j = 0; j < 100 && id < databaseEntriesCount; j++) { + batch.add(Helpers.createPut(id, columnFamily1, qualifier1)); + id++; + } + batches.add(t.batch(batch)); + } + List> flatResults = + batches.stream().flatMap(List::stream).collect(Collectors.toList()); + CompletableFuture.allOf(flatResults.toArray(new CompletableFuture[0])) + .exceptionally(e -> null) + .get(); + + for (int i = 0; i < flatResults.size(); i++) { + checkIfShouldHaveThrown(flatResults.get(i), rowKeyFromId(i)); + } + } + databaseHelpers.verifyTableConsistency(tableName); + } + + @Test + public void testBatchWithSecondaryErrors() + throws IOException, InterruptedException, ExecutionException { + Assume.assumeTrue( + ConfigurationHelper.isSecondaryHBase() && ConfigurationHelper.isUsingHBaseMiniCluster()); + + int databaseEntriesCount = 1000; + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + TestMirroringTable.ReportedErrorsContext reportedErrorsContext1 = + new TestMirroringTable.ReportedErrorsContext(); + final TableName tableName = connectionRule.createTable(columnFamily1); + try 
(MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable t = asyncConnection.getTable(tableName); + + List>> batches = new ArrayList<>(); + int id = 0; + while (id < databaseEntriesCount) { + List batch = new ArrayList<>(); + for (int j = 0; j < 100 && id < databaseEntriesCount; j++) { + batch.add(Helpers.createPut(id, columnFamily1, qualifier1)); + id++; + } + batches.add(t.batch(batch)); + } + List> flatResults = + batches.stream().flatMap(List::stream).collect(Collectors.toList()); + CompletableFuture.allOf(flatResults.toArray(new CompletableFuture[0])).get(); + } + databaseHelpers.verifyTableConsistency(tableName, failPredicate); + reportedErrorsContext1.assertNewErrorsReported(databaseEntriesCount / 2); + } + + @Test + public void testResultScanner() throws IOException { + int databaseEntriesCount = 1000; + + TableName tableName = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName, databaseEntriesCount, columnFamily1, qualifier1); + + FailingHBaseHRegion.failMutation(failPredicate, "failed"); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + AsyncTable table = asyncConnection.getTable(tableName); + + try (ResultScanner scanner = table.getScanner(columnFamily1)) { + assertThat(Iterators.size(scanner.iterator())).isEqualTo(databaseEntriesCount); + } + } + } + + @Test + public void testScanAll() throws IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + TableName tableName = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName, databaseEntriesCount, columnFamily1, qualifier1); + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + assertThat(asyncConnection.getTable(tableName).scanAll(new Scan()).get().size()) + .isEqualTo(databaseEntriesCount); + } + } + + @Test + public void testBasicScan() throws 
IOException, ExecutionException, InterruptedException { + int databaseEntriesCount = 1000; + + TableName tableName = connectionRule.createTable(columnFamily1); + databaseHelpers.fillTable(tableName, databaseEntriesCount, columnFamily1, qualifier1); + + AtomicInteger read = new AtomicInteger(0); + CompletableFuture scanConsumerEnded = new CompletableFuture<>(); + + ScanResultConsumer consumer = + new ScanResultConsumer() { + @Override + public boolean onNext(Result result) { + read.incrementAndGet(); + return true; + } + + @Override + public void onError(Throwable throwable) { + scanConsumerEnded.completeExceptionally(throwable); + } + + @Override + public void onComplete() { + scanConsumerEnded.complete(null); + } + }; + + try (MirroringAsyncConnection asyncConnection = + asyncConnectionRule.createAsyncConnection(config)) { + asyncConnection + .getTableBuilder(tableName, this.executorServiceRule.executorService) + .build() + .scan(new Scan(), consumer); + scanConsumerEnded.get(); + } + + assertThat(read.get()).isEqualTo(databaseEntriesCount); + } + + private void checkIfShouldHaveThrown(CompletableFuture future, byte[] rowKey) { + assertThat(failPredicate.apply(rowKey)).isEqualTo(future.isCompletedExceptionally()); + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/AsyncConnectionRule.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/AsyncConnectionRule.java new file mode 100644 index 0000000000..151dbb31ac --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/java/com/google/cloud/bigtable/hbase/mirroring/utils/AsyncConnectionRule.java @@ -0,0 +1,42 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the 
"License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.hbase.mirroring.utils; + +import com.google.cloud.bigtable.mirroring.hbase2_x.MirroringAsyncConnection; +import java.util.concurrent.ExecutionException; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.client.AsyncConnection; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.junit.rules.ExternalResource; + +public class AsyncConnectionRule extends ExternalResource { + private final ConnectionRule connectionRule; + + public AsyncConnectionRule(ConnectionRule connectionRule) { + this.connectionRule = connectionRule; + } + + public MirroringAsyncConnection createAsyncConnection(Configuration configuration) { + this.connectionRule.updateConfigurationWithHbaseMiniClusterProps(configuration); + + try { + AsyncConnection conn = ConnectionFactory.createAsyncConnection(configuration).get(); + return (MirroringAsyncConnection) conn; + } catch (ExecutionException | InterruptedException e) { + throw new RuntimeException(e); + } + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml new file mode 100644 index 0000000000..153730bd5a --- /dev/null +++ 
b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/bigtable-to-hbase-local-configuration.xml
@@ -0,0 +1,56 @@
+<configuration>
+  <property>
+    <name>hbase.client.connection.impl</name>
+    <value>com.google.cloud.bigtable.mirroring.hbase2_x.MirroringConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.primary-client.connection.impl</name>
+    <value>com.google.cloud.bigtable.hbase2_x.BigtableConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.primary-client.async.connection.impl</name>
+    <value>org.apache.hadoop.hbase.client.BigtableAsyncConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.secondary-client.connection.impl</name>
+    <value>default</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.secondary-client.async.connection.impl</name>
+    <value>default</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.project.id</name>
+    <value>fake-project</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.instance.id</name>
+    <value>fake-instance</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.use.gcj.client</name>
+    <value>false</value>
+  </property>
+
+  <property>
+    <name>hbase.client.retries.number</name>
+    <value>2</value>
+  </property>
+
+  <property>
+    <name>use-hbase-mini-cluster</name>
+    <value>true</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.write-error-log.appender.prefix-path</name>
+    <value>/tmp/write-error-log</value>
+  </property>
+</configuration>
diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml
new file mode 100644
index 0000000000..3a2411a4bc
--- /dev/null
+++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/hbase-to-bigtable-local-configuration.xml
@@ -0,0 +1,56 @@
+<configuration>
+  <property>
+    <name>hbase.client.connection.impl</name>
+    <value>com.google.cloud.bigtable.mirroring.hbase2_x.MirroringConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.primary-client.connection.impl</name>
+    <value>default</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.primary-client.async.connection.impl</name>
+    <value>default</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.secondary-client.connection.impl</name>
+    <value>com.google.cloud.bigtable.hbase2_x.BigtableConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.secondary-client.async.connection.impl</name>
+    <value>org.apache.hadoop.hbase.client.BigtableAsyncConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.project.id</name>
+    <value>fake-project</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.instance.id</name>
+    <value>fake-instance</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.use.gcj.client</name>
+    <value>false</value>
+  </property>
+
+  <property>
+    <name>hbase.client.retries.number</name>
+    <value>2</value>
+  </property>
+
+  <property>
+    <name>use-hbase-mini-cluster</name>
+    <value>true</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.write-error-log.appender.prefix-path</name>
+    <value>/tmp/write-error-log</value>
+  </property>
+</configuration>
diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/log4j.properties b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/log4j.properties
new file mode 100644
index 0000000000..f0dbf50014
--- /dev/null
+++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/log4j.properties
@@ -0,0 +1,7 @@
+log4j.rootLogger=WARN, console
+
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.Target=System.out
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=[%-20t] %-5p %-20c{1} - %m%n
+log4j.logger.com.google.cloud.bigtable.mirroring=OFF
diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/prometheus.yml b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/prometheus.yml
new file mode 100644
index 0000000000..63c3fc1a27
--- /dev/null
+++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x-integration-tests/src/test/resources/prometheus.yml
@@ -0,0 +1,13 @@
+global:
+  scrape_interval: 5s
+
+  external_labels:
+    monitor: 'bigtable-mirroring-client-integration-tests'
+
+scrape_configs:
+  - job_name: 'bigtable-mirroring-client-integration-tests'
+
+    scrape_interval: 5s
+
+ 
static_configs: + - targets: ['localhost:8888'] diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncBufferedMutator.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncBufferedMutator.java new file mode 100644 index 0000000000..2415a01090 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncBufferedMutator.java @@ -0,0 +1,179 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase2_x; + +import com.google.api.core.InternalApi; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.cloud.bigtable.mirroring.hbase2_x.utils.futures.FutureConverter; +import com.google.cloud.bigtable.mirroring.hbase2_x.utils.futures.FutureUtils; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.AsyncBufferedMutator; +import org.apache.hadoop.hbase.client.Mutation; + +@InternalApi +public class MirroringAsyncBufferedMutator implements AsyncBufferedMutator { + + private final AsyncBufferedMutator primary; + private final AsyncBufferedMutator secondary; + private final FlowController flowController; + private final ListenableReferenceCounter referenceCounter; + private final SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; + private final AtomicBoolean closed = new AtomicBoolean(false); + private final Timestamper timestamper; + private final MirroringAsyncConfiguration configuration; + + public MirroringAsyncBufferedMutator( + AsyncBufferedMutator primary, + AsyncBufferedMutator secondary, + 
MirroringAsyncConfiguration configuration, + FlowController flowController, + SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer, + Timestamper timestamper) { + this.primary = primary; + this.secondary = secondary; + this.configuration = configuration; + this.flowController = flowController; + this.secondaryWriteErrorConsumer = secondaryWriteErrorConsumer; + this.referenceCounter = new ListenableReferenceCounter(); + this.timestamper = timestamper; + } + + @Override + public TableName getName() { + return primary.getName(); + } + + @Override + public Configuration getConfiguration() { + return primary.getConfiguration(); + } + + @Override + public CompletableFuture<Void> mutate(Mutation mutation) { + this.timestamper.fillTimestamp(mutation); + referenceCounter.incrementReferenceCount(); + CompletableFuture<Void> primaryCompleted = primary.mutate(mutation); + CompletableFuture<Void> resultFuture = new CompletableFuture<>(); + + primaryCompleted + .thenRun( + () -> { + CompletableFuture<FlowController.ResourceReservation> resourceRequested = + FutureConverter.toCompletable( + flowController.asyncRequestResource( + new RequestResourcesDescription(mutation))); + + resourceRequested + .thenRun( + () -> { + resultFuture.complete(null); + secondary + .mutate(mutation) + .thenRun(referenceCounter::decrementReferenceCount) + .exceptionally( + secondaryError -> { + this.secondaryWriteErrorConsumer.consume( + MirroringSpanConstants.HBaseOperation.BUFFERED_MUTATOR_MUTATE, + mutation, + FutureUtils.unwrapCompletionException(secondaryError)); + referenceCounter.decrementReferenceCount(); + return null; + }); + }) + .exceptionally( + resourceReservationError -> { + referenceCounter.decrementReferenceCount(); + resultFuture.complete(null); + this.secondaryWriteErrorConsumer.consume( + MirroringSpanConstants.HBaseOperation.BUFFERED_MUTATOR_MUTATE, + mutation, + resourceReservationError); + return null; + }); + }) + .exceptionally( + primaryError -> { + referenceCounter.decrementReferenceCount(); + 
resultFuture.completeExceptionally(primaryError); + return null; + }); + + return resultFuture; + } + + @Override + public List<CompletableFuture<Void>> mutate(List<? extends Mutation> list) { + ArrayList<CompletableFuture<Void>> results = new ArrayList<>(list.size()); + for (Mutation mutation : list) { + results.add(mutate(mutation)); + } + return results; + } + + @Override + public void flush() { + primary.flush(); + secondary.flush(); + } + + @Override + public synchronized void close() { + if (this.closed.get()) { + return; + } + this.closed.set(true); + closeMirroringBufferedMutatorAndWaitForAsyncOperations(); + + this.primary.close(); + this.secondary.close(); + } + + @Override + public long getWriteBufferSize() { + return primary.getWriteBufferSize(); + } + + @Override + public long getPeriodicalFlushTimeout(TimeUnit unit) { + return primary.getPeriodicalFlushTimeout(unit); + } + + private void closeMirroringBufferedMutatorAndWaitForAsyncOperations() { + this.referenceCounter.decrementReferenceCount(); + try { + this.referenceCounter + .getOnLastReferenceClosed() + .get( + this.configuration.mirroringOptions.connectionTerminationTimeoutMillis, + TimeUnit.MILLISECONDS); + } catch (ExecutionException | InterruptedException | TimeoutException e) { + throw new RuntimeException(e); + } + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncConfiguration.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncConfiguration.java index ae45336708..65f13db21c 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncConfiguration.java +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncConfiguration.java @@
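The mutate() flow implemented above completes the caller's future once the primary write and the flow-control reservation succeed, then mirrors the mutation to the secondary in the background, routing secondary failures to a write-error consumer instead of the user. A minimal stand-alone sketch of that ordering with plain CompletableFuture (all names below are simplified stand-ins, not the patch's real classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class MirroredMutateSketch {
    // Stand-in for the patch's SecondaryWriteErrorConsumer: secondary failures
    // are recorded here rather than surfaced to the caller.
    static final List<String> secondaryErrors = new ArrayList<>();

    static CompletableFuture<Void> writePrimary(String row) {
        return CompletableFuture.completedFuture(null); // pretend the primary write succeeded
    }

    static CompletableFuture<Void> writeSecondary(String row) {
        // Simulate a secondary failure for one row to exercise the error path.
        if (row.equals("bad-row")) {
            CompletableFuture<Void> f = new CompletableFuture<>();
            f.completeExceptionally(new RuntimeException("secondary unavailable"));
            return f;
        }
        return CompletableFuture.completedFuture(null);
    }

    // The pattern: the future returned to the caller completes after the
    // primary write; the secondary write proceeds in the background and its
    // failures go to the error consumer, never to the caller.
    static CompletableFuture<Void> mutate(String row) {
        CompletableFuture<Void> userFuture = new CompletableFuture<>();
        writePrimary(row)
            .thenRun(() -> {
                userFuture.complete(null); // user sees success after the primary write
                writeSecondary(row)
                    .exceptionally(err -> {
                        secondaryErrors.add(row); // routed to the write-error consumer
                        return null;
                    });
            })
            .exceptionally(primaryErr -> {
                userFuture.completeExceptionally(primaryErr); // primary errors reach the user
                return null;
            });
        return userFuture;
    }

    public static void main(String[] args) {
        mutate("row-1").join();
        mutate("bad-row").join(); // still succeeds from the caller's perspective
        System.out.println(secondaryErrors); // prints [bad-row]
    }
}
```

The flow-control reservation step is elided here; in the patch it sits between the primary completion and the secondary write.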
-15,79 +15,60 @@ */ package com.google.cloud.bigtable.mirroring.hbase2_x; -import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection; +import com.google.api.core.InternalApi; import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringOptions; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper; import org.apache.hadoop.conf.Configuration; -public class MirroringAsyncConfiguration extends Configuration { - Configuration primaryConfiguration; - Configuration secondaryConfiguration; - MirroringOptions mirroringOptions; +@InternalApi("For internal use only") +public class MirroringAsyncConfiguration { + public final Configuration primaryConfiguration; + public final Configuration secondaryConfiguration; + public final MirroringOptions mirroringOptions; + public final Configuration baseConfiguration; - public MirroringAsyncConfiguration( - Configuration primaryConfiguration, - Configuration secondaryConfiguration, - Configuration mirroringConfiguration) { - super.set("hbase.client.connection.impl", MirroringConnection.class.getCanonicalName()); - super.set( - "hbase.client.async.connection.impl", MirroringAsyncConnection.class.getCanonicalName()); + public MirroringAsyncConfiguration(Configuration configuration) { + this.baseConfiguration = configuration; - this.primaryConfiguration = primaryConfiguration; - this.secondaryConfiguration = secondaryConfiguration; - this.mirroringOptions = new MirroringOptions(mirroringConfiguration); - } + MirroringConfigurationHelper.checkParameters( + configuration, + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY); + MirroringConfigurationHelper.checkParameters( + configuration, + MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, + MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY); - public MirroringAsyncConfiguration(Configuration conf) { - 
super(conf); // Copy-constructor - // In case the user constructed MirroringAsyncConfiguration by hand. - if (conf instanceof MirroringAsyncConfiguration) { - MirroringAsyncConfiguration mirroringConfiguration = (MirroringAsyncConfiguration) conf; - this.primaryConfiguration = new Configuration(mirroringConfiguration.primaryConfiguration); - this.secondaryConfiguration = - new Configuration(mirroringConfiguration.secondaryConfiguration); - this.mirroringOptions = mirroringConfiguration.mirroringOptions; - } else { - MirroringConfigurationHelper.checkParameters( - conf, - MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY); - MirroringConfigurationHelper.checkParameters( - conf, - MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, - MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY); - - final Configuration primaryConfiguration = - MirroringConfigurationHelper.extractPrefixedConfig( - MirroringConfigurationHelper.MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, conf); - MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( - primaryConfiguration, - conf, - MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, - "hbase.client.connection.impl"); - MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( - primaryConfiguration, - conf, - MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, - "hbase.client.async.connection.impl"); - this.primaryConfiguration = primaryConfiguration; + final Configuration primaryConfiguration = + MirroringConfigurationHelper.extractPrefixedConfig( + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONFIG_PREFIX_KEY, configuration); + MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( + primaryConfiguration, + configuration, + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, + 
"hbase.client.connection.impl"); + MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( + primaryConfiguration, + configuration, + MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, + "hbase.client.async.connection.impl"); + this.primaryConfiguration = primaryConfiguration; - final Configuration secondaryConfiguration = - MirroringConfigurationHelper.extractPrefixedConfig( - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, conf); - MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( - secondaryConfiguration, - conf, - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, - "hbase.client.connection.impl"); - MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( - secondaryConfiguration, - conf, - MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, - "hbase.client.async.connection.impl"); - this.secondaryConfiguration = secondaryConfiguration; + final Configuration secondaryConfiguration = + MirroringConfigurationHelper.extractPrefixedConfig( + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, configuration); + MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( + secondaryConfiguration, + configuration, + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, + "hbase.client.connection.impl"); + MirroringConfigurationHelper.fillConnectionConfigWithClassImplementation( + secondaryConfiguration, + configuration, + MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, + "hbase.client.async.connection.impl"); + this.secondaryConfiguration = secondaryConfiguration; - this.mirroringOptions = new MirroringOptions(conf); - } + this.mirroringOptions = new MirroringOptions(configuration); } } diff --git 
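The rewritten MirroringAsyncConfiguration above builds the primary and secondary client configurations from one flat Configuration by key prefix (extractPrefixedConfig). A minimal sketch of that prefix-splitting idea using a plain Map; the key names here are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixedConfigSketch {
    // Rough analogue of MirroringConfigurationHelper.extractPrefixedConfig:
    // collect every key under "<prefix>." and strip the prefix, yielding a
    // per-connection configuration.
    static Map<String, String> extractPrefixed(String prefix, Map<String, String> all) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : all.entrySet()) {
            if (e.getKey().startsWith(prefix + ".")) {
                out.put(e.getKey().substring(prefix.length() + 1), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Hypothetical key names, not the project's real configuration keys.
        conf.put("mirroring.primary-client.hbase.zookeeper.quorum", "zk-a");
        conf.put("mirroring.secondary-client.hbase.zookeeper.quorum", "zk-b");
        Map<String, String> primary = extractPrefixed("mirroring.primary-client", conf);
        System.out.println(primary.get("hbase.zookeeper.quorum")); // prints zk-a
    }
}
```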
a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncConnection.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncConnection.java index 62b725e2c4..507cf304e0 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncConnection.java +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncConnection.java @@ -15,20 +15,25 @@ */ package com.google.cloud.bigtable.mirroring.hbase2_x; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.AccumulatedExceptions; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.Logger; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.Logger; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.faillog.FailedMutationLogger; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.reflection.ReflectionConstructor; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import 
com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import com.google.common.util.concurrent.MoreExecutors; import java.io.IOException; -import java.io.InterruptedIOException; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicBoolean; import java.util.function.BiFunction; import java.util.function.Function; @@ -37,6 +42,7 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.AdvancedScanResultConsumer; import org.apache.hadoop.hbase.client.AsyncAdminBuilder; +import org.apache.hadoop.hbase.client.AsyncBufferedMutator; import org.apache.hadoop.hbase.client.AsyncBufferedMutatorBuilder; import org.apache.hadoop.hbase.client.AsyncConnection; import org.apache.hadoop.hbase.client.AsyncTable; @@ -49,6 +55,8 @@ import org.apache.hadoop.hbase.security.User; public class MirroringAsyncConnection implements AsyncConnection { + private static final Logger Log = new Logger(MirroringAsyncConnection.class); + private final MirroringAsyncConfiguration configuration; private final AsyncConnection primaryConnection; private final AsyncConnection secondaryConnection; @@ -58,6 +66,9 @@ public class MirroringAsyncConnection implements AsyncConnection { private final SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; private final MirroringTracer mirroringTracer; private final AtomicBoolean closed = new AtomicBoolean(false); + private final ReadSampler readSampler; + private final ExecutorService executorService; + private final Timestamper timestamper; /** * The constructor called from {@link @@ -76,7 +87,7 @@ public MirroringAsyncConnection( Object ignoredRegistry, String ignoredClusterId, User user) - throws ExecutionException, InterruptedException { + throws 
Throwable { this.configuration = new MirroringAsyncConfiguration(conf); this.mirroringTracer = new MirroringTracer(); @@ -91,33 +102,62 @@ public MirroringAsyncConnection( this.referenceCounter = new ListenableReferenceCounter(); this.flowController = new FlowController( - ReflectionConstructor.construct( - this.configuration.mirroringOptions.flowControllerStrategyClass, - this.configuration.mirroringOptions)); + this.configuration + .mirroringOptions + .flowControllerStrategyFactoryClass + .newInstance() + .create(this.configuration.mirroringOptions)); this.mismatchDetector = - ReflectionConstructor.construct( - this.configuration.mirroringOptions.mismatchDetectorClass, this.mirroringTracer); - - Logger failedWritesLogger = - new Logger( - ReflectionConstructor.construct( - this.configuration.mirroringOptions.writeErrorLogAppenderClass, - Configuration.class, - this.configuration), - ReflectionConstructor.construct( - this.configuration.mirroringOptions.writeErrorLogSerializerClass)); + this.configuration + .mirroringOptions + .mismatchDetectorFactoryClass + .newInstance() + .create( + this.mirroringTracer, + this.configuration.mirroringOptions.maxLoggedBinaryValueLength); + + FailedMutationLogger failedMutationLogger = + new FailedMutationLogger( + this.configuration + .mirroringOptions + .faillog + .writeErrorLogAppenderFactoryClass + .newInstance() + .create(this.configuration.mirroringOptions.faillog), + this.configuration + .mirroringOptions + .faillog + .writeErrorLogSerializerFactoryClass + .newInstance() + .create()); SecondaryWriteErrorConsumer writeErrorConsumer = - ReflectionConstructor.construct( - this.configuration.mirroringOptions.writeErrorConsumerClass, failedWritesLogger); + this.configuration + .mirroringOptions + .writeErrorConsumerFactoryClass + .newInstance() + .create(failedMutationLogger); this.secondaryWriteErrorConsumer = new SecondaryWriteErrorConsumerWithMetrics(this.mirroringTracer, writeErrorConsumer); + + this.readSampler = new 
ReadSampler(this.configuration.mirroringOptions.readSamplingRate); + this.executorService = Executors.newCachedThreadPool(); + this.timestamper = + Timestamper.create(this.configuration.mirroringOptions.enableDefaultClientSideTimestamps); + } + + public AsyncConnection getPrimaryConnection() { + return this.primaryConnection; + } + + public AsyncConnection getSecondaryConnection() { + return this.secondaryConnection; } @Override public Configuration getConfiguration() { - return this.configuration; + return this.configuration.baseConfiguration; } @Override @@ -131,18 +171,49 @@ public void close() throws IOException { return; } + final AccumulatedExceptions exceptions = new AccumulatedExceptions(); + try { + primaryConnection.close(); + } catch (IOException e) { + exceptions.add(e); + } + + CompletableFuture closingFinishedFuture = new CompletableFuture<>(); + + // The secondary connection can only be closed after all in-flight requests are finished. + this.referenceCounter + .getOnLastReferenceClosed() + .addListener( + () -> { + try { + secondaryConnection.close(); + closingFinishedFuture.complete(null); + } catch (IOException e) { + closingFinishedFuture.completeExceptionally(e); + } + }, + MoreExecutors.directExecutor()); + this.referenceCounter.decrementReferenceCount(); try { - this.referenceCounter.getOnLastReferenceClosed().get(); - this.primaryConnection.close(); - this.secondaryConnection.close(); - } catch (InterruptedException e) { - IOException wrapperException = new InterruptedIOException(); - wrapperException.initCause(e); - throw wrapperException; - } catch (ExecutionException e) { - throw new RuntimeException(e); + // Wait for in-flight requests to be finished but with a timeout to prevent deadlock. 
+ closingFinishedFuture.get( + this.configuration.mirroringOptions.connectionTerminationTimeoutMillis, + TimeUnit.MILLISECONDS); + } catch (ExecutionException | InterruptedException e) { + // If the secondary close has thrown while we were waiting, the error will be + // propagated to the user. + exceptions.add(new IOException(e)); + } catch (TimeoutException e) { + // But if the timeout was reached, we just leave the operation pending. + Log.error( + "MirroringAsyncConnection#close() timed out. Some of operations on secondary " + + "database are still in-flight and might be lost, but are not cancelled and " + + "will be performed asynchronously until the program terminates."); + // This error is not reported to the user. } + + exceptions.rethrowIfCaptured(); } public AsyncTableBuilder getTableBuilder(TableName tableName) { @@ -160,33 +231,35 @@ public AsyncTableBuilder getTableBuilder( } @Override - public AsyncTableRegionLocator getRegionLocator(TableName tableName) { - throw new UnsupportedOperationException(); + public AsyncBufferedMutatorBuilder getBufferedMutatorBuilder(TableName tableName) { + return new MirroringAsyncBufferedMutatorBuilder( + this.primaryConnection.getBufferedMutatorBuilder(tableName), + this.secondaryConnection.getBufferedMutatorBuilder(tableName)); } @Override - public void clearRegionLocationCache() { - throw new UnsupportedOperationException(); + public AsyncBufferedMutatorBuilder getBufferedMutatorBuilder( + TableName tableName, ExecutorService executorService) { + return getBufferedMutatorBuilder(tableName); } @Override - public AsyncAdminBuilder getAdminBuilder() { - throw new UnsupportedOperationException(); + public AsyncTableRegionLocator getRegionLocator(TableName tableName) { + return this.primaryConnection.getRegionLocator(tableName); } @Override - public AsyncAdminBuilder getAdminBuilder(ExecutorService executorService) { + public void clearRegionLocationCache() { throw new UnsupportedOperationException(); } @Override - public 
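MirroringAsyncConnection#close() above drops the connection's own reference and then waits, with a timeout, for in-flight operations to release theirs; on timeout the pending secondary work is left running rather than cancelled. The reference-counting handshake can be sketched on its own (simplified names, not the patch's ListenableReferenceCounter):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedCloseSketch {
    // The connection holds one reference; each in-flight operation takes
    // another and releases it when done.
    final AtomicInteger refs = new AtomicInteger(1);
    final CompletableFuture<Void> lastReferenceClosed = new CompletableFuture<>();

    void increment() { refs.incrementAndGet(); }

    void decrement() {
        if (refs.decrementAndGet() == 0) {
            lastReferenceClosed.complete(null); // all in-flight work finished
        }
    }

    // Returns true if everything drained in time, false on timeout; in the
    // patch the timeout path just logs and leaves secondary work pending.
    boolean close(long timeoutMillis) {
        decrement(); // drop the reference held by the connection itself
        try {
            lastReferenceClosed.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            return false;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        RefCountedCloseSketch c = new RefCountedCloseSketch();
        c.increment(); // an in-flight request starts...
        c.decrement(); // ...and finishes
        System.out.println(c.close(100)); // prints true: nothing pending
    }
}
```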
AsyncBufferedMutatorBuilder getBufferedMutatorBuilder(TableName tableName) { + public AsyncAdminBuilder getAdminBuilder() { throw new UnsupportedOperationException(); } @Override - public AsyncBufferedMutatorBuilder getBufferedMutatorBuilder( - TableName tableName, ExecutorService executorService) { + public AsyncAdminBuilder getAdminBuilder(ExecutorService executorService) { throw new UnsupportedOperationException(); } @@ -201,7 +274,7 @@ public Hbck getHbck(ServerName serverName) throws IOException { } private class MirroringAsyncTableBuilder - implements AsyncTableBuilder { + extends BuilderParameterSetter> implements AsyncTableBuilder { private final AsyncTableBuilder primaryTableBuilder; private final AsyncTableBuilder secondaryTableBuilder; @@ -213,112 +286,220 @@ public MirroringAsyncTableBuilder( @Override public AsyncTable build() { - return new MirroringAsyncTable( + return new MirroringAsyncTable<>( this.primaryTableBuilder.build(), this.secondaryTableBuilder.build(), mismatchDetector, flowController, secondaryWriteErrorConsumer, mirroringTracer, - referenceCounter); - } - - private AsyncTableBuilder setTimeParameter( - long timeAmount, - TimeUnit timeUnit, - BiFunction> primaryFunction, - BiFunction> secondaryFunction) { - primaryFunction.apply(timeAmount, timeUnit); - secondaryFunction.apply(timeAmount, timeUnit); - return this; - } - - private AsyncTableBuilder setIntegerParameter( - int value, - Function> primaryFunction, - Function> secondaryFunction) { - primaryFunction.apply(value); - secondaryFunction.apply(value); - return this; + readSampler, + timestamper, + referenceCounter, + executorService, + configuration.mirroringOptions.resultScannerBufferedMismatchedResults); } @Override public AsyncTableBuilder setOperationTimeout(long timeout, TimeUnit unit) { - return setTimeParameter( + setTimeParameter( timeout, unit, this.primaryTableBuilder::setOperationTimeout, this.secondaryTableBuilder::setOperationTimeout); + return this; } @Override public 
AsyncTableBuilder setScanTimeout(long timeout, TimeUnit unit) { - return setTimeParameter( + setTimeParameter( timeout, unit, this.primaryTableBuilder::setScanTimeout, this.secondaryTableBuilder::setScanTimeout); + return this; } @Override public AsyncTableBuilder setRpcTimeout(long timeout, TimeUnit unit) { - return setTimeParameter( + setTimeParameter( timeout, unit, this.primaryTableBuilder::setRpcTimeout, this.secondaryTableBuilder::setRpcTimeout); + return this; } @Override public AsyncTableBuilder setReadRpcTimeout(long timeout, TimeUnit unit) { - return setTimeParameter( + setTimeParameter( timeout, unit, this.primaryTableBuilder::setReadRpcTimeout, this.secondaryTableBuilder::setReadRpcTimeout); + return this; } @Override public AsyncTableBuilder setWriteRpcTimeout(long timeout, TimeUnit unit) { - return setTimeParameter( + setTimeParameter( timeout, unit, this.primaryTableBuilder::setWriteRpcTimeout, this.secondaryTableBuilder::setWriteRpcTimeout); + return this; } @Override public AsyncTableBuilder setRetryPause(long pause, TimeUnit unit) { - return setTimeParameter( + setTimeParameter( pause, unit, this.primaryTableBuilder::setRetryPause, this.secondaryTableBuilder::setRetryPause); + return this; } @Override public AsyncTableBuilder setRetryPauseForCQTBE(long pause, TimeUnit unit) { - return setTimeParameter( + setTimeParameter( pause, unit, this.primaryTableBuilder::setRetryPauseForCQTBE, this.secondaryTableBuilder::setRetryPauseForCQTBE); + return this; } @Override public AsyncTableBuilder setMaxAttempts(int maxAttempts) { - return setIntegerParameter( + setIntegerParameter( maxAttempts, this.primaryTableBuilder::setMaxAttempts, this.secondaryTableBuilder::setMaxAttempts); + return this; } @Override public AsyncTableBuilder setStartLogErrorsCnt(int maxRetries) { - return setIntegerParameter( + setIntegerParameter( maxRetries, this.primaryTableBuilder::setStartLogErrorsCnt, this.secondaryTableBuilder::setStartLogErrorsCnt); + return this; + } + } + + 
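Every setter in the mirroring builders above applies the same value to the primary and secondary underlying builders and then returns this; the helper extracted as BuilderParameterSetter reduces to a small fan-out pattern. A sketch with hypothetical static fields standing in for the two wrapped builders:

```java
import java.util.function.Function;

public class FanOutBuilderSketch {
    // Stand-ins for the primary and secondary builders' recorded settings.
    static int primaryMaxAttempts;
    static int secondaryMaxAttempts;

    // Mirrors BuilderParameterSetter#setIntegerParameter: Function is used
    // (rather than Consumer) because real builder setters return the builder.
    static <T> void setIntegerParameter(
            int value, Function<Integer, T> primary, Function<Integer, T> secondary) {
        primary.apply(value);
        secondary.apply(value);
    }

    public static void main(String[] args) {
        setIntegerParameter(
            5,
            v -> { primaryMaxAttempts = v; return null; },
            v -> { secondaryMaxAttempts = v; return null; });
        System.out.println(primaryMaxAttempts + " " + secondaryMaxAttempts); // prints 5 5
    }
}
```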
private class MirroringAsyncBufferedMutatorBuilder + extends BuilderParameterSetter + implements AsyncBufferedMutatorBuilder { + private final AsyncBufferedMutatorBuilder primaryMutatorBuilder; + private final AsyncBufferedMutatorBuilder secondaryMutatorBuilder; + + public MirroringAsyncBufferedMutatorBuilder( + AsyncBufferedMutatorBuilder primaryMutatorBuilder, + AsyncBufferedMutatorBuilder secondaryMutatorBuilder) { + this.primaryMutatorBuilder = primaryMutatorBuilder; + this.secondaryMutatorBuilder = secondaryMutatorBuilder; + } + + @Override + public AsyncBufferedMutator build() { + return new MirroringAsyncBufferedMutator( + this.primaryMutatorBuilder.build(), + this.secondaryMutatorBuilder.build(), + configuration, + flowController, + secondaryWriteErrorConsumer, + timestamper); + } + + @Override + public AsyncBufferedMutatorBuilder setOperationTimeout(long timeout, TimeUnit unit) { + setTimeParameter( + timeout, + unit, + this.primaryMutatorBuilder::setOperationTimeout, + this.secondaryMutatorBuilder::setOperationTimeout); + return this; + } + + @Override + public AsyncBufferedMutatorBuilder setRpcTimeout(long timeout, TimeUnit unit) { + setTimeParameter( + timeout, + unit, + this.primaryMutatorBuilder::setRpcTimeout, + this.secondaryMutatorBuilder::setRpcTimeout); + return this; + } + + @Override + public AsyncBufferedMutatorBuilder setRetryPause(long pause, TimeUnit unit) { + setTimeParameter( + pause, + unit, + this.primaryMutatorBuilder::setRetryPause, + this.secondaryMutatorBuilder::setRetryPause); + return this; + } + + @Override + public AsyncBufferedMutatorBuilder setWriteBufferSize(long writeBufferSize) { + setLongParameter( + writeBufferSize, + this.primaryMutatorBuilder::setWriteBufferSize, + this.secondaryMutatorBuilder::setWriteBufferSize); + return this; + } + + @Override + public AsyncBufferedMutatorBuilder setMaxAttempts(int maxAttempts) { + setIntegerParameter( + maxAttempts, + this.primaryMutatorBuilder::setMaxAttempts, + 
this.secondaryMutatorBuilder::setMaxAttempts); + return this; + } + + @Override + public AsyncBufferedMutatorBuilder setStartLogErrorsCnt(int startLogErrorsCnt) { + setIntegerParameter( + startLogErrorsCnt, + this.primaryMutatorBuilder::setStartLogErrorsCnt, + this.secondaryMutatorBuilder::setStartLogErrorsCnt); + return this; + } + + @Override + public AsyncBufferedMutatorBuilder setMaxKeyValueSize(int maxKeyValueSize) { + setIntegerParameter( + maxKeyValueSize, + this.primaryMutatorBuilder::setMaxKeyValueSize, + this.secondaryMutatorBuilder::setMaxKeyValueSize); + return this; + } + } + + private static class BuilderParameterSetter { + protected void setTimeParameter( + long timeAmount, + TimeUnit timeUnit, + BiFunction primaryFunction, + BiFunction secondaryFunction) { + primaryFunction.apply(timeAmount, timeUnit); + secondaryFunction.apply(timeAmount, timeUnit); + } + + protected void setIntegerParameter( + int value, Function primaryFunction, Function secondaryFunction) { + primaryFunction.apply(value); + secondaryFunction.apply(value); + } + + protected void setLongParameter( + long value, Function primaryFunction, Function secondaryFunction) { + primaryFunction.apply(value); + secondaryFunction.apply(value); } } } diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncTable.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncTable.java index 06a5f69126..683160a21b 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncTable.java +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringAsyncTable.java @@ -15,31 +15,40 @@ */ package 
com.google.cloud.bigtable.mirroring.hbase2_x; +import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils.emptyResult; import static com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils.makePutFromResult; import static com.google.cloud.bigtable.mirroring.hbase2_x.utils.AsyncRequestScheduling.reserveFlowControlResourcesThenScheduleSecondary; -import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringTable; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringResultScanner; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringTable.RequestScheduler; import com.google.cloud.bigtable.mirroring.hbase1_x.WriteOperationFutureCallback; +import com.google.cloud.bigtable.mirroring.hbase1_x.asyncwrappers.AsyncResultScannerWrapper; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.FailedSuccessfulSplit; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.BatchHelpers.ReadWriteSplit; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.WriteOperationInfo; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.VerificationContinuationFactory; -import com.google.cloud.bigtable.mirroring.hbase2_x.utils.AsyncRequestScheduling; +import com.google.cloud.bigtable.mirroring.hbase2_x.utils.AsyncRequestScheduling.OperationStages; import com.google.cloud.bigtable.mirroring.hbase2_x.utils.futures.FutureConverter; import com.google.cloud.bigtable.mirroring.hbase2_x.utils.futures.FutureUtils; import com.google.common.base.Predicate; import com.google.common.util.concurrent.FutureCallback; +import com.google.common.util.concurrent.MoreExecutors; import java.util.ArrayList; import java.util.List; import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletionException; +import java.util.concurrent.ExecutorService; import java.util.concurrent.TimeUnit; import java.util.function.BiConsumer; import java.util.function.Consumer; @@ -66,17 +75,29 @@ import org.apache.hadoop.hbase.client.ServiceCaller; import org.apache.hadoop.hbase.io.TimeRange; import org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcChannel; +import org.checkerframework.checker.nullness.compatqual.NullableDecl; public class MirroringAsyncTable<C extends ScanResultConsumerBase> implements AsyncTable<C> { private final Predicate<Object> resultIsFaultyPredicate = (o) -> o instanceof Throwable; + private final AsyncTable<C> primaryTable; private final AsyncTable<C> secondaryTable; private final VerificationContinuationFactory verificationContinuationFactory; private final FlowController flowController; private final SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; private final MirroringTracer mirroringTracer; + /** + * HBase 2.x AsyncTables are not closeable, so we do not need to keep a separate reference + * counter for them; we can just use MirroringAsyncConnection's reference counter.
+ */ private final ListenableReferenceCounter referenceCounter; + private final ReadSampler readSampler; + private final ExecutorService executorService; + private final RequestScheduler requestScheduler; + private final int resultScannerBufferedMismatchedResults; + private final Timestamper timestamper; + public MirroringAsyncTable( AsyncTable primaryTable, AsyncTable secondaryTable, @@ -84,7 +105,11 @@ public MirroringAsyncTable( FlowController flowController, SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer, MirroringTracer mirroringTracer, - ListenableReferenceCounter referenceCounter) { + ReadSampler readSampler, + Timestamper timestamper, + ListenableReferenceCounter referenceCounter, + ExecutorService executorService, + int resultScannerBufferedMismatchedResults) { this.primaryTable = primaryTable; this.secondaryTable = secondaryTable; this.verificationContinuationFactory = new VerificationContinuationFactory(mismatchDetector); @@ -92,6 +117,17 @@ public MirroringAsyncTable( this.secondaryWriteErrorConsumer = secondaryWriteErrorConsumer; this.mirroringTracer = mirroringTracer; this.referenceCounter = referenceCounter; + this.readSampler = readSampler; + this.executorService = executorService; + this.requestScheduler = + new RequestScheduler(this.flowController, this.mirroringTracer, this.referenceCounter); + this.resultScannerBufferedMismatchedResults = resultScannerBufferedMismatchedResults; + this.timestamper = timestamper; + } + + @Override + public TableName getName() { + return this.primaryTable.getName(); } @Override @@ -118,11 +154,10 @@ public CompletableFuture exists(Get get) { @Override public CompletableFuture put(Put put) { + this.timestamper.fillTimestamp(put); CompletableFuture primaryFuture = this.primaryTable.put(put); return writeWithFlowControl( - new MirroringTable.WriteOperationInfo(put), - primaryFuture, - () -> this.secondaryTable.put(put)) + new WriteOperationInfo(put), primaryFuture, () -> 
this.secondaryTable.put(put)) .userNotified; } @@ -130,29 +165,36 @@ public CompletableFuture put(Put put) { public CompletableFuture delete(Delete delete) { CompletableFuture primaryFuture = this.primaryTable.delete(delete); return writeWithFlowControl( - new MirroringTable.WriteOperationInfo(delete), - primaryFuture, - () -> this.secondaryTable.delete(delete)) + new WriteOperationInfo(delete), primaryFuture, () -> this.secondaryTable.delete(delete)) .userNotified; } @Override public CompletableFuture append(Append append) { - CompletableFuture primaryFuture = this.primaryTable.append(append); - return mutationAsPut(primaryFuture).userNotified; + boolean wantsResults = append.isReturnResults(); + CompletableFuture primaryFuture = + this.primaryTable.append(append.setReturnResults(true)); + return mutationAsPut(primaryFuture) + .userNotified + .thenApply(primaryResult -> wantsResults ? primaryResult : emptyResult()); } @Override public CompletableFuture increment(Increment increment) { - CompletableFuture primaryFuture = this.primaryTable.increment(increment); - return mutationAsPut(primaryFuture).userNotified; + boolean wantsResults = increment.isReturnResults(); + CompletableFuture primaryFuture = + this.primaryTable.increment(increment.setReturnResults(true)); + return mutationAsPut(primaryFuture) + .userNotified + .thenApply(primaryResult -> wantsResults ? 
primaryResult : emptyResult()); } @Override public CompletableFuture mutateRow(RowMutations rowMutations) { + this.timestamper.fillTimestamp(rowMutations); CompletableFuture primaryFuture = this.primaryTable.mutateRow(rowMutations); return writeWithFlowControl( - new MirroringTable.WriteOperationInfo(rowMutations), + new WriteOperationInfo(rowMutations), primaryFuture, () -> this.secondaryTable.mutateRow(rowMutations)) .userNotified; @@ -160,30 +202,77 @@ public CompletableFuture mutateRow(RowMutations rowMutations) { @Override public List> get(List list) { - return batch(list); + return generalBatch( + list, + this.primaryTable::get, + this.secondaryTable::get, + BatchBuilder::new, + Result.class) + .userNotified; } @Override public List> put(List list) { - return batch(list); + return generalBatch( + list, + this.primaryTable::put, + this.secondaryTable::put, + BatchBuilder::new, + Void.class) + .userNotified; } @Override public List> delete(List list) { - return batch(list); + return generalBatch( + list, + this.primaryTable::delete, + this.secondaryTable::delete, + BatchBuilder::new, + Void.class) + .userNotified; } @Override public List> batch(List actions) { - final int numActions = actions.size(); - final AsyncRequestScheduling.OperationStages>> returnedValue = - new AsyncRequestScheduling.OperationStages<>( - Stream.generate((Supplier>) CompletableFuture::new) - .limit(numActions) - .collect(Collectors.toCollection(ArrayList::new))); - - final List> primaryFutures = this.primaryTable.batch(actions); - // Unfortunately, we cannot create T[]. 
+ return this.generalBatch( + actions, + this.primaryTable::batch, + this.secondaryTable::batch, + BatchBuilder::new, + Object.class) + .userNotified; + } + + @Override + public List> exists(List list) { + return generalBatch( + list, + this.primaryTable::exists, + this.secondaryTable::exists, + ExistsBuilder::new, + Boolean.class) + .userNotified; + } + + public + OperationStages>> generalBatch( + List userActions, + Function, List>> primaryFunction, + Function, List>> secondaryFunction, + Function, GeneralBatchBuilder> + batchBuilderCreator, + Class successfulResultTypeClass) { + userActions = this.timestamper.fillTimestamp(userActions); + OperationUtils.RewrittenIncrementAndAppendIndicesInfo actions = + new OperationUtils.RewrittenIncrementAndAppendIndicesInfo<>(userActions); + final int numActions = actions.operations.size(); + + final OperationStages>> returnedValue = + new OperationStages<>(generateList(numActions, CompletableFuture::new)); + + final List> primaryFutures = + primaryFunction.apply(actions.operations); final Object[] primaryResults = new Object[numActions]; BiConsumer primaryErrorHandler = @@ -191,31 +280,47 @@ public List> batch(List actions) { waitForAllWithErrorHandler(primaryFutures, primaryErrorHandler, primaryResults) .whenComplete( (ignoredResult, ignoredError) -> { - final FailedSuccessfulSplit failedSuccessfulSplit = - new FailedSuccessfulSplit<>(actions, primaryResults, resultIsFaultyPredicate); - - if (failedSuccessfulSplit.successfulOperations.size() == 0) { - // All results were instances of Throwable, so we already completed - // exceptionally result futures by errorHandler passed to + boolean skipReads = !readSampler.shouldNextReadOperationBeSampled(); + final FailedSuccessfulSplit failedSuccessfulSplit = + BatchHelpers.createOperationsSplit( + actions.operations, + primaryResults, + resultIsFaultyPredicate, + successfulResultTypeClass, + skipReads); + + if (failedSuccessfulSplit.successfulOperations.isEmpty()) { + // Two 
possible cases: + // - Everything failed: all primary results were instances of Throwable, and + // we have already completed the result futures exceptionally with the errorHandler passed to // waitForAllWithErrorHandler. - returnedValue.getVerificationCompletedFuture().complete(null); + // - Reads were successful but were excluded from the split due to sampling, and we + // should forward the primary results to the list returned to the user. + if (skipReads) { + completeSuccessfulResultFutures( + returnedValue.userNotified, primaryResults, actions); + } + returnedValue.verificationCompleted(); + return; + } - final List operationsToScheduleOnSecondary = - failedSuccessfulSplit.successfulOperations; + GeneralBatchBuilder batchBuilder = batchBuilderCreator.apply(failedSuccessfulSplit); + + final List operationsToScheduleOnSecondary = + BatchHelpers.rewriteIncrementsAndAppendsAsPuts( + failedSuccessfulSplit.successfulOperations, + failedSuccessfulSplit.successfulResults); final Object[] secondaryResults = new Object[operationsToScheduleOnSecondary.size()]; - final ReadWriteSplit successfulReadWriteSplit = + ReadWriteSplit successfulReadWriteSplit = new ReadWriteSplit<>( failedSuccessfulSplit.successfulOperations, failedSuccessfulSplit.successfulResults, - Result.class); + successfulResultTypeClass); final RequestResourcesDescription requestResourcesDescription = - new RequestResourcesDescription( - operationsToScheduleOnSecondary, successfulReadWriteSplit.readResults); + batchBuilder.getRequestResourcesDescription(operationsToScheduleOnSecondary); final CompletableFuture resourceReservationRequest = @@ -225,12 +330,14 @@ public List> batch(List actions) { resourceReservationRequest.whenComplete( (ignoredResourceReservation, resourceReservationError) -> { completeSuccessfulResultFutures( - returnedValue.userNotified, primaryResults, numActions); + returnedValue.userNotified, primaryResults, actions); if (resourceReservationError != null) { - this.secondaryWriteErrorConsumer.consume( - 
HBaseOperation.BATCH, - successfulReadWriteSplit.writeOperations, - resourceReservationError); + if (!successfulReadWriteSplit.writeOperations.isEmpty()) { + this.secondaryWriteErrorConsumer.consume( + HBaseOperation.BATCH, + successfulReadWriteSplit.writeOperations, + resourceReservationError); + } return; } FutureUtils.forwardResult( @@ -239,28 +346,31 @@ public List> batch(List actions) { resourceReservationRequest, () -> waitForAllWithErrorHandler( - this.secondaryTable.batch(operationsToScheduleOnSecondary), + secondaryFunction.apply(operationsToScheduleOnSecondary), (idx, throwable) -> {}, secondaryResults), (ignoredPrimaryResult) -> - BatchHelpers.createBatchVerificationCallback( - failedSuccessfulSplit, - successfulReadWriteSplit, - secondaryResults, - verificationContinuationFactory.getMismatchDetector(), - secondaryWriteErrorConsumer, - resultIsFaultyPredicate, - mirroringTracer)) + batchBuilder.getVerificationCallback(secondaryResults)) .getVerificationCompletedFuture(), returnedValue.getVerificationCompletedFuture()); }); }); - return wrapWithReferenceCounter(returnedValue).userNotified; + return wrapWithReferenceCounter(returnedValue); } - private void completeSuccessfulResultFutures( - List> resultFutures, Object[] primaryResults, int numResults) { - for (int i = 0; i < numResults; i++) { + private ArrayList generateList(int size, Supplier initializer) { + return Stream.generate(initializer) + .limit(size) + .collect(Collectors.toCollection(ArrayList::new)); + } + + private void completeSuccessfulResultFutures( + List> resultFutures, + Object[] primaryResults, + OperationUtils.RewrittenIncrementAndAppendIndicesInfo + rewrittenIncrementAndAppendIndicesInfo) { + rewrittenIncrementAndAppendIndicesInfo.discardUnwantedResults(primaryResults); + for (int i = 0; i < primaryResults.length; i++) { if (!(resultIsFaultyPredicate.apply(primaryResults[i]))) { resultFutures.get(i).complete((T) primaryResults[i]); } @@ -292,18 +402,17 @@ private CompletableFuture 
waitForAllWithErrorHandler( return CompletableFuture.allOf(handledFutures.toArray(new CompletableFuture[0])); } - private - AsyncRequestScheduling.OperationStages> mutationAsPut( - CompletableFuture primaryFuture) { - AsyncRequestScheduling.OperationStages> returnedValue = - new AsyncRequestScheduling.OperationStages<>(new CompletableFuture<>()); + private OperationStages> mutationAsPut( + CompletableFuture primaryFuture) { + OperationStages> returnedValue = + new OperationStages<>(new CompletableFuture<>()); primaryFuture .thenAccept( (primaryResult) -> { Put put = makePutFromResult(primaryResult); FutureUtils.forwardResult( writeWithFlowControl( - new MirroringTable.WriteOperationInfo(put), + new WriteOperationInfo(put), CompletableFuture.completedFuture(primaryResult), () -> this.secondaryTable.put(put).thenApply(ignored -> null)), returnedValue); @@ -317,15 +426,13 @@ AsyncRequestScheduling.OperationStages> mutationAsPut( return wrapWithReferenceCounter(returnedValue); } - private - AsyncRequestScheduling.OperationStages> - readWithVerificationAndFlowControl( - final Function resourcesDescriptionCreator, - final CompletableFuture primaryFuture, - final Supplier> secondaryFutureSupplier, - final Function> verificationCallbackCreator) { - AsyncRequestScheduling.OperationStages> returnedValue = - new AsyncRequestScheduling.OperationStages<>(new CompletableFuture<>()); + private OperationStages> readWithVerificationAndFlowControl( + final Function resourcesDescriptionCreator, + final CompletableFuture primaryFuture, + final Supplier> secondaryFutureSupplier, + final Function> verificationCallbackCreator) { + OperationStages> returnedValue = + new OperationStages<>(new CompletableFuture<>()); primaryFuture.whenComplete( (primaryResult, primaryError) -> { if (primaryError != null) { @@ -333,6 +440,11 @@ AsyncRequestScheduling.OperationStages> mutationAsPut( returnedValue.verificationCompleted(); return; } + if (!this.readSampler.shouldNextReadOperationBeSampled()) { 
+ returnedValue.userNotified.complete(primaryResult); + returnedValue.verificationCompleted(); + return; + } FutureUtils.forwardResult( reserveFlowControlResourcesThenScheduleSecondary( primaryFuture, @@ -346,8 +458,8 @@ AsyncRequestScheduling.OperationStages> mutationAsPut( return wrapWithReferenceCounter(returnedValue); } - private AsyncRequestScheduling.OperationStages> writeWithFlowControl( - final MirroringTable.WriteOperationInfo writeOperationInfo, + private OperationStages> writeWithFlowControl( + final WriteOperationInfo writeOperationInfo, final CompletableFuture primaryFuture, final Supplier> secondaryFutureSupplier) { final Consumer secondaryWriteErrorHandler = @@ -371,51 +483,32 @@ public void onFailure(Throwable throwable) { secondaryWriteErrorHandler)); } - private AsyncRequestScheduling.OperationStages wrapWithReferenceCounter( - AsyncRequestScheduling.OperationStages toBeReferenceCounted) { + private OperationStages wrapWithReferenceCounter(OperationStages toBeReferenceCounted) { keepReferenceUntilOperationCompletes(toBeReferenceCounted.getVerificationCompletedFuture()); return toBeReferenceCounted; } - private void keepReferenceUntilOperationCompletes(CompletableFuture future) { + private void keepReferenceUntilOperationCompletes(CompletableFuture future) { this.referenceCounter.incrementReferenceCount(); future.whenComplete( (ignoredResult, ignoredError) -> this.referenceCounter.decrementReferenceCount()); } @Override - public TableName getName() { - throw new UnsupportedOperationException(); - } - - @Override - public Configuration getConfiguration() { - throw new UnsupportedOperationException(); - } - - @Override - public long getRpcTimeout(TimeUnit timeUnit) { - throw new UnsupportedOperationException(); - } - - @Override - public long getReadRpcTimeout(TimeUnit timeUnit) { - throw new UnsupportedOperationException(); - } - - @Override - public long getWriteRpcTimeout(TimeUnit timeUnit) { - throw new UnsupportedOperationException(); - } - - 
@Override - public long getOperationTimeout(TimeUnit timeUnit) { - throw new UnsupportedOperationException(); - } - - @Override - public long getScanTimeout(TimeUnit timeUnit) { - throw new UnsupportedOperationException(); + public ResultScanner getScanner(Scan scan) { + return new MirroringResultScanner( + scan, + this.primaryTable.getScanner(scan), + new AsyncResultScannerWrapper( + this.secondaryTable.getScanner(scan), + MoreExecutors.listeningDecorator(this.executorService), + mirroringTracer), + this.verificationContinuationFactory, + this.mirroringTracer, + this.readSampler.shouldNextReadOperationBeSampled(), + this.requestScheduler, + this.referenceCounter, + this.resultScannerBufferedMismatchedResults); } @Override @@ -424,32 +517,20 @@ public CheckAndMutateBuilder checkAndMutate(byte[] row, byte[] family) { } @Override - public void scan(Scan scan, C c) { - throw new UnsupportedOperationException(); - } - - @Override - public ResultScanner getScanner(Scan scan) { - throw new UnsupportedOperationException(); + public void scan(Scan scan, C consumer) { + this.primaryTable.scan(scan, consumer); } @Override public CompletableFuture> scanAll(Scan scan) { - throw new UnsupportedOperationException(); + CompletableFuture> result = this.primaryTable.scanAll(scan); + keepReferenceUntilOperationCompletes(result); + return result; } @Override - public CompletableFuture coprocessorService( - Function function, ServiceCaller serviceCaller, byte[] bytes) { - throw new UnsupportedOperationException(); - } - - @Override - public CoprocessorServiceBuilder coprocessorService( - Function function, - ServiceCaller serviceCaller, - CoprocessorCallback coprocessorCallback) { - throw new UnsupportedOperationException(); + public Configuration getConfiguration() { + return primaryTable.getConfiguration(); } private class MirroringCheckAndMutateBuilder implements CheckAndMutateBuilder { @@ -459,12 +540,12 @@ public MirroringCheckAndMutateBuilder(CheckAndMutateBuilder 
primaryBuilder) { this.primaryBuilder = primaryBuilder; } - private AsyncRequestScheduling.OperationStages> checkAndMutate( - MirroringTable.WriteOperationInfo writeOperationInfo, + private OperationStages> checkAndMutate( + WriteOperationInfo writeOperationInfo, CompletableFuture primary, Supplier> secondary) { - AsyncRequestScheduling.OperationStages> returnedValue = - new AsyncRequestScheduling.OperationStages<>(new CompletableFuture<>()); + OperationStages> returnedValue = + new OperationStages<>(new CompletableFuture<>()); primary .thenAccept( (wereMutationsApplied) -> { @@ -491,8 +572,9 @@ private AsyncRequestScheduling.OperationStages> check @Override public CompletableFuture thenPut(Put put) { + timestamper.fillTimestamp(put); return checkAndMutate( - new MirroringTable.WriteOperationInfo(put), + new WriteOperationInfo(put), this.primaryBuilder.thenPut(put), () -> secondaryTable.put(put)) .userNotified; @@ -501,7 +583,7 @@ public CompletableFuture thenPut(Put put) { @Override public CompletableFuture thenDelete(Delete delete) { return checkAndMutate( - new MirroringTable.WriteOperationInfo(delete), + new WriteOperationInfo(delete), this.primaryBuilder.thenDelete(delete), () -> secondaryTable.delete(delete)) .userNotified; @@ -509,8 +591,9 @@ public CompletableFuture thenDelete(Delete delete) { @Override public CompletableFuture thenMutate(RowMutations rowMutations) { + timestamper.fillTimestamp(rowMutations); return checkAndMutate( - new MirroringTable.WriteOperationInfo(rowMutations), + new WriteOperationInfo(rowMutations), this.primaryBuilder.thenMutate(rowMutations), () -> secondaryTable.mutateRow(rowMutations)) .userNotified; @@ -540,4 +623,131 @@ public CheckAndMutateBuilder ifMatches(CompareOperator compareOperator, byte[] b return this; } } + + private interface GeneralBatchBuilder { + RequestResourcesDescription getRequestResourcesDescription( + List operationsToPerformOnSecondary); + + FutureCallback getVerificationCallback(Object[] 
secondaryResults); + } + + private class BatchBuilder + implements GeneralBatchBuilder { + final FailedSuccessfulSplit failedSuccessfulSplit; + final ReadWriteSplit successfulReadWriteSplit; + + BatchBuilder(FailedSuccessfulSplit split) { + this.failedSuccessfulSplit = split; + this.successfulReadWriteSplit = + new ReadWriteSplit<>( + failedSuccessfulSplit.successfulOperations, + failedSuccessfulSplit.successfulResults, + Result.class); + } + + @Override + public RequestResourcesDescription getRequestResourcesDescription( + List operationsToPerformOnSecondary) { + return new RequestResourcesDescription( + operationsToPerformOnSecondary, successfulReadWriteSplit.readResults); + } + + @Override + public FutureCallback getVerificationCallback(Object[] secondaryResults) { + return BatchHelpers.createBatchVerificationCallback( + this.failedSuccessfulSplit, + this.successfulReadWriteSplit, + secondaryResults, + verificationContinuationFactory.getMismatchDetector(), + secondaryWriteErrorConsumer, + resultIsFaultyPredicate, + mirroringTracer); + } + } + + private class ExistsBuilder implements GeneralBatchBuilder { + final FailedSuccessfulSplit primaryFailedSuccessfulSplit; + final boolean[] primarySuccessfulResults; + + ExistsBuilder(FailedSuccessfulSplit split) { + this.primaryFailedSuccessfulSplit = split; + this.primarySuccessfulResults = + new boolean[this.primaryFailedSuccessfulSplit.successfulResults.length]; + for (int i = 0; i < this.primaryFailedSuccessfulSplit.successfulResults.length; i++) { + this.primarySuccessfulResults[i] = this.primaryFailedSuccessfulSplit.successfulResults[i]; + } + } + + @Override + public RequestResourcesDescription getRequestResourcesDescription( + List operationsToPerformOnSecondary) { + return new RequestResourcesDescription(primarySuccessfulResults); + } + + @Override + public FutureCallback getVerificationCallback(Object[] secondaryResults) { + return new FutureCallback() { + @Override + public void onSuccess(@NullableDecl Void 
unused) { + boolean[] booleanSecondaryResults = new boolean[secondaryResults.length]; + for (int i = 0; i < secondaryResults.length; i++) { + booleanSecondaryResults[i] = (boolean) secondaryResults[i]; + } + + verificationContinuationFactory + .getMismatchDetector() + .existsAll( + primaryFailedSuccessfulSplit.successfulOperations, + primarySuccessfulResults, + booleanSecondaryResults); + } + + @Override + public void onFailure(Throwable error) { + verificationContinuationFactory + .getMismatchDetector() + .existsAll(primaryFailedSuccessfulSplit.successfulOperations, error); + } + }; + } + } + + @Override + public long getRpcTimeout(TimeUnit timeUnit) { + throw new UnsupportedOperationException(); + } + + @Override + public long getReadRpcTimeout(TimeUnit timeUnit) { + throw new UnsupportedOperationException(); + } + + @Override + public long getWriteRpcTimeout(TimeUnit timeUnit) { + throw new UnsupportedOperationException(); + } + + @Override + public long getOperationTimeout(TimeUnit timeUnit) { + throw new UnsupportedOperationException(); + } + + @Override + public long getScanTimeout(TimeUnit timeUnit) { + throw new UnsupportedOperationException(); + } + + @Override + public CompletableFuture coprocessorService( + Function function, ServiceCaller serviceCaller, byte[] bytes) { + throw new UnsupportedOperationException(); + } + + @Override + public CoprocessorServiceBuilder coprocessorService( + Function function, + ServiceCaller serviceCaller, + CoprocessorCallback coprocessorCallback) { + throw new UnsupportedOperationException(); + } } diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringConnection.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringConnection.java new file mode 100644 index 0000000000..481fe54466 --- /dev/null +++ 
b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringConnection.java @@ -0,0 +1,99 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase2_x; + +import java.util.concurrent.ExecutorService; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.client.TableBuilder; +import org.apache.hadoop.hbase.security.User; + +public class MirroringConnection + extends com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection implements Connection { + public MirroringConnection(Configuration conf, boolean managed, ExecutorService pool, User user) + throws Throwable { + super(conf, managed, pool, user); + } + + public MirroringConnection(Configuration conf, ExecutorService pool, User user) throws Throwable { + this(conf, false, pool, user); + } + + @Override + public void clearRegionLocationCache() { + throw new UnsupportedOperationException("clearRegionLocationCache"); + } + + @Override + public TableBuilder getTableBuilder(TableName tableName, ExecutorService executorService) { + final TableBuilder primaryTableBuilder = + getPrimaryConnection().getTableBuilder(tableName, executorService); + final TableBuilder 
secondaryTableBuilder = + getSecondaryConnection().getTableBuilder(tableName, executorService); + return new TableBuilder() { + @Override + public TableBuilder setOperationTimeout(int timeout) { + primaryTableBuilder.setOperationTimeout(timeout); + secondaryTableBuilder.setOperationTimeout(timeout); + return this; + } + + @Override + public TableBuilder setRpcTimeout(int timeout) { + primaryTableBuilder.setRpcTimeout(timeout); + secondaryTableBuilder.setRpcTimeout(timeout); + return this; + } + + @Override + public TableBuilder setReadRpcTimeout(int timeout) { + primaryTableBuilder.setReadRpcTimeout(timeout); + secondaryTableBuilder.setReadRpcTimeout(timeout); + return this; + } + + @Override + public TableBuilder setWriteRpcTimeout(int timeout) { + primaryTableBuilder.setWriteRpcTimeout(timeout); + secondaryTableBuilder.setWriteRpcTimeout(timeout); + return this; + } + + @Override + public Table build() { + return new MirroringTable( + primaryTableBuilder.build(), + secondaryTableBuilder.build(), + executorService, + mismatchDetector, + flowController, + secondaryWriteErrorConsumer, + readSampler, + timestamper, + performWritesConcurrently, + waitForSecondaryWrites, + mirroringTracer, + referenceCounter, + MirroringConnection.super + .configuration + .mirroringOptions + .resultScannerBufferedMismatchedResults); + } + }; + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringTable.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringTable.java new file mode 100644 index 0000000000..0d3dd5f344 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/MirroringTable.java @@ -0,0 +1,86 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache 
License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase2_x; + +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import java.io.IOException; +import java.util.List; +import java.util.concurrent.ExecutorService; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.client.Append; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.client.TableDescriptor; + +public class MirroringTable extends com.google.cloud.bigtable.mirroring.hbase1_x.MirroringTable + implements Table { + public MirroringTable( + Table primaryTable, + Table secondaryTable, + ExecutorService executorService, + MismatchDetector mismatchDetector, + FlowController flowController, + SecondaryWriteErrorConsumer secondaryWriteErrorConsumer, + ReadSampler readSampler, + Timestamper timestamper, + boolean 
performWritesConcurrently, + boolean waitForSecondaryWrites, + MirroringTracer mirroringTracer, + ReferenceCounter referenceCounter, + int resultScannerBufferedMismatchedResults) { + super( + primaryTable, + secondaryTable, + executorService, + mismatchDetector, + flowController, + secondaryWriteErrorConsumer, + readSampler, + timestamper, + performWritesConcurrently, + waitForSecondaryWrites, + mirroringTracer, + referenceCounter, + resultScannerBufferedMismatchedResults); + } + + @Override + public TableDescriptor getDescriptor() throws IOException { + return primaryTable.getDescriptor(); + } + + @Override + public boolean[] exists(List gets) throws IOException { + return existsAll(gets); + } + + /** + * HBase 1.x's {@link Table#append} returns {@code null} when {@link Append#isReturnResults} is + * {@code false} + */ + @Override + public Result append(Append append) throws IOException { + Result result = super.append(append); + return result == null ? Result.create(new Cell[0]) : result; + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/compat/CellComparatorCompatImpl.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/compat/CellComparatorCompatImpl.java new file mode 100644 index 0000000000..07d9a2a7b8 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/compat/CellComparatorCompatImpl.java @@ -0,0 +1,29 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.google.cloud.bigtable.mirroring.hbase2_x.utils.compat; + +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.compat.CellComparatorCompat; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellComparator; + +public class CellComparatorCompatImpl implements CellComparatorCompat { + static CellComparator cellComparator = CellComparator.getInstance(); + + @Override + public int compareCells(Cell cell1, Cell cell2) { + return cellComparator.compare(cell1, cell2, true); + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/futures/FutureConverter.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/futures/FutureConverter.java index de25f61c69..aa6d3a72a5 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/futures/FutureConverter.java +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/main/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/futures/FutureConverter.java @@ -16,7 +16,6 @@ package com.google.cloud.bigtable.mirroring.hbase2_x.utils.futures; -// TODO(aczajkowski): remove those temporary dependencies (also from pom.xml) import static net.javacrumbs.futureconverter.java8guava.FutureConverter.toCompletableFuture; import static 
net.javacrumbs.futureconverter.java8guava.FutureConverter.toListenableFuture; diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncBufferedMutator.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncBufferedMutator.java new file mode 100644 index 0000000000..5d532df6ef --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncBufferedMutator.java @@ -0,0 +1,212 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package com.google.cloud.bigtable.mirroring.hbase2_x;
+
+import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerMock;
+import static com.google.common.truth.Truth.assertThat;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics;
+import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController;
+import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.RequestResourcesDescription;
+import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation;
+import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper;
+import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper;
+import com.google.cloud.bigtable.mirroring.hbase2_x.utils.futures.FutureConverter;
+import java.io.IOException;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import org.apache.hadoop.hbase.client.AsyncBufferedMutator;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.JUnit4;
+import org.mockito.Mock;
+import org.mockito.junit.MockitoJUnit;
+import org.mockito.junit.MockitoRule;
+
+@RunWith(JUnit4.class)
+public class TestMirroringAsyncBufferedMutator {
+  @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule();
+
+  @Mock AsyncBufferedMutator primaryMutator;
+  @Mock AsyncBufferedMutator secondaryMutator;
+  @Mock FlowController flowController;
+  @Mock SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer;
+  Timestamper timestamper = new NoopTimestamper();
+
+  CompletableFuture<Void> primaryFuture;
+  CompletableFuture<Void> secondaryCalled;
+  Put put;
+
+  MirroringAsyncBufferedMutator mirroringMutator;
+
+  @Before
+  public void setUp() {
+    setupFlowControllerMock(flowController);
+    this.mirroringMutator =
+        spy(
+            new MirroringAsyncBufferedMutator(
+                primaryMutator,
+                secondaryMutator,
+                mock(MirroringAsyncConfiguration.class),
+                flowController,
+                secondaryWriteErrorConsumer,
+                timestamper));
+
+    this.put = new Put(Bytes.toBytes("rowKey"));
+    put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("c1"), Bytes.toBytes("value"));
+
+    this.primaryFuture = new CompletableFuture<>();
+    this.secondaryCalled = new CompletableFuture<>();
+    when(primaryMutator.mutate(put)).thenReturn(primaryFuture);
+  }
+
+  @Test
+  public void testResultIsCompletedOnPrimaryCompletion()
+      throws ExecutionException, InterruptedException {
+
+    when(secondaryMutator.mutate(put))
+        .thenAnswer(
+            invocationOnMock -> {
+              secondaryCalled.complete(null);
+              return new CompletableFuture<>();
+            });
+
+    CompletableFuture resourcesAllocated =
+        new CompletableFuture<>();
+    when(flowController.asyncRequestResource(any(RequestResourcesDescription.class)))
+        .thenReturn(FutureConverter.toListenable(resourcesAllocated));
+
+    CompletableFuture<Void> resultFuture = mirroringMutator.mutate(put);
+
+    // waiting for primary
+    verify(primaryMutator, times(1)).mutate(put);
+    verify(flowController, times(0)).asyncRequestResource(any(RequestResourcesDescription.class));
+    assertThat(resultFuture.isDone()).isFalse();
+
+    // primary complete but still waiting for resources so not done
+    primaryFuture.complete(null);
+    assertThat(resultFuture.isDone()).isFalse();
+
+    // got resources so we got the result
+    resourcesAllocated.complete(null);
+    resultFuture.get();
+
+    // if we got the resources then the secondary should be scheduled
+    secondaryCalled.get();
+    verify(flowController, times(1)).asyncRequestResource(any(RequestResourcesDescription.class));
+    verify(secondaryMutator, times(1)).mutate(put);
+  }
+
+  @Test
+  public void testPrimaryFailed() {
+    CompletableFuture<Void> primaryFailure = new CompletableFuture<>();
+
+    when(primaryMutator.mutate(put)).thenReturn(primaryFailure);
+
+    CompletableFuture<Void> resultFuture = mirroringMutator.mutate(put);
+    primaryFailure.completeExceptionally(new IOException());
+
+    verify(primaryMutator, times(1)).mutate(put);
+    verify(flowController, times(0)).asyncRequestResource(any(RequestResourcesDescription.class));
+    verify(secondaryMutator, times(0)).mutate(put);
+    try {
+      resultFuture.get();
+    } catch (InterruptedException | ExecutionException ignored) {
+    }
+    assertThat(resultFuture.isCompletedExceptionally()).isTrue();
+    assertThat(secondaryCalled.isDone()).isFalse();
+  }
+
+  @Test
+  public void testRequestResourceFailed() {
+    CompletableFuture resourcesAllocated =
+        new CompletableFuture<>();
+    when(flowController.asyncRequestResource(any(RequestResourcesDescription.class)))
+        .thenReturn(FutureConverter.toListenable(resourcesAllocated));
+
+    CompletableFuture<Void> resultFuture = mirroringMutator.mutate(put);
+
+    // waiting for primary
+    verify(primaryMutator, times(1)).mutate(put);
+    verify(flowController, times(0)).asyncRequestResource(any(RequestResourcesDescription.class));
+    assertThat(resultFuture.isDone()).isFalse();
+
+    // primary complete but still waiting for resources so not done
+    primaryFuture.complete(null);
+
+    assertThat(secondaryCalled.isDone()).isFalse();
+    resourcesAllocated.completeExceptionally(new IOException());
+    try {
+      resultFuture.get();
+    } catch (InterruptedException | ExecutionException ignored) {
+    }
+    assertThat(resultFuture.isCompletedExceptionally()).isFalse();
+
+    verify(flowController, times(1)).asyncRequestResource(any(RequestResourcesDescription.class));
+  }
+
+  @Test
+  public void testSecondaryFailed() throws ExecutionException, InterruptedException {
+    CompletableFuture<Void> secondaryFailure = new CompletableFuture<>();
+    when(secondaryMutator.mutate(put))
+        .thenAnswer(
+            invocationOnMock -> {
+              secondaryCalled.complete(null);
+              return secondaryFailure;
+            });
+
+    CompletableFuture resourcesAllocated =
+        new CompletableFuture<>();
+    when(flowController.asyncRequestResource(any(RequestResourcesDescription.class)))
+        .thenReturn(FutureConverter.toListenable(resourcesAllocated));
+
+    CompletableFuture<Void> resultFuture = mirroringMutator.mutate(put);
+
+    // waiting for primary
+    verify(primaryMutator, times(1)).mutate(put);
+    verify(flowController, times(0)).asyncRequestResource(any(RequestResourcesDescription.class));
+    assertThat(resultFuture.isDone()).isFalse();
+
+    // primary complete but still waiting for resources so not done
+    primaryFuture.complete(null);
+    assertThat(resultFuture.isDone()).isFalse();
+    IOException expectedException = new IOException("expected");
+    secondaryFailure.completeExceptionally(expectedException);
+
+    // got resources so we got the result
+    resourcesAllocated.complete(null);
+    resultFuture.get();
+
+    secondaryCalled.get();
+
+    verify(flowController, times(1)).asyncRequestResource(any(RequestResourcesDescription.class));
+    verify(secondaryMutator, times(1)).mutate(put);
+
+    assertThat(resultFuture.isCompletedExceptionally()).isFalse();
+    verify(secondaryWriteErrorConsumer)
+        .consume(HBaseOperation.BUFFERED_MUTATOR_MUTATE, put, expectedException);
+  }
+}
diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncConfiguration.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncConfiguration.java
index 5d5ed3b68c..23489886a4 100644
--- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncConfiguration.java
+++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncConfiguration.java
@@ -17,9 +17,8 @@
 import static com.google.common.truth.Truth.assertThat;
 import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.fail;
 
-import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConfiguration;
-import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection;
 import com.google.cloud.bigtable.mirroring.hbase1_x.utils.MirroringConfigurationHelper;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
@@ -33,62 +32,86 @@ private Exception assertInvalidConfiguration(final Configuration test) {
         });
   }
 
+  private void assertValidConfiguration(final Configuration test) {
+    try {
+      new MirroringAsyncConfiguration(test);
+    } catch (Exception e) {
+      fail("Shouldn't have thrown");
+    }
+  }
+
   @Test
   public void testRequiresConfiguringImplClasses() {
     Configuration testConfiguration = new Configuration(false);
-    testConfiguration.set(MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, "1");
     testConfiguration.set(
-        MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "2");
+        MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, "ConnectionClass1");
+    testConfiguration.set(
+        MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "ConnectionClass2");
+
+    // MirroringAsyncConfiguration requires that keys for both synchronous and asynchronous classes
+    // of both primary and secondary connections are filled.
+    // None of asynchronous connection class keys is set.
Exception exc = assertInvalidConfiguration(testConfiguration); assertThat(exc) .hasMessageThat() .contains("Specify google.bigtable.mirroring.primary-client.async.connection.impl"); testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, "3"); + MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, + "AsyncConnectionClass1"); + // Secondary asynchronous connection class key is not set. exc = assertInvalidConfiguration(testConfiguration); assertThat(exc) .hasMessageThat() .contains("Specify google.bigtable.mirroring.secondary-client.async.connection.impl"); testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, "4"); - MirroringAsyncConfiguration configuration = new MirroringAsyncConfiguration(testConfiguration); + MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, + "AsyncConnectionClass2"); + // All required keys are set. + assertValidConfiguration(testConfiguration); } @Test public void testFillsAllClassNames() { Configuration testConfiguration = new Configuration(false); - testConfiguration.set(MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, "1"); testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "2"); + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, "ConnectionClass1"); testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, "3"); + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "ConnectionClass2"); testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, "4"); + MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, + "AsyncConnectionClass1"); + testConfiguration.set( + MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, + "AsyncConnectionClass2"); - MirroringAsyncConfiguration 
configuration = new MirroringAsyncConfiguration(testConfiguration); - assertThat(configuration.primaryConfiguration.get("hbase.client.connection.impl")) - .isEqualTo("1"); - assertThat(configuration.secondaryConfiguration.get("hbase.client.connection.impl")) - .isEqualTo("2"); - assertThat(configuration.primaryConfiguration.get("hbase.client.async.connection.impl")) - .isEqualTo("3"); - assertThat(configuration.secondaryConfiguration.get("hbase.client.async.connection.impl")) - .isEqualTo("4"); + MirroringAsyncConfiguration asyncConfiguration = + new MirroringAsyncConfiguration(testConfiguration); + assertThat(asyncConfiguration.primaryConfiguration.get("hbase.client.connection.impl")) + .isEqualTo("ConnectionClass1"); + assertThat(asyncConfiguration.secondaryConfiguration.get("hbase.client.connection.impl")) + .isEqualTo("ConnectionClass2"); + assertThat(asyncConfiguration.primaryConfiguration.get("hbase.client.async.connection.impl")) + .isEqualTo("AsyncConnectionClass1"); + assertThat(asyncConfiguration.secondaryConfiguration.get("hbase.client.async.connection.impl")) + .isEqualTo("AsyncConnectionClass2"); } @Test public void testSameConnectionClassesRequireOneOfPrefixes() { Configuration testConfiguration = new Configuration(false); - testConfiguration.set(MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, "1"); testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "2"); + MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, "ConnectionClass1"); + testConfiguration.set( + MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "ConnectionClass2"); testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, "3"); + MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, + "SameAsyncConnectionClass"); testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, "3"); + 
MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, + "SameAsyncConnectionClass"); Exception exc = assertInvalidConfiguration(testConfiguration); assertThat(exc) @@ -98,49 +121,6 @@ public void testSameConnectionClassesRequireOneOfPrefixes() { testConfiguration.set( MirroringConfigurationHelper.MIRRORING_SECONDARY_CONFIG_PREFIX_KEY, "prefix"); - MirroringAsyncConfiguration config = new MirroringAsyncConfiguration(testConfiguration); - } - - @Test - public void testCopyConstructorSetsImplClasses() { - Configuration empty = new Configuration(false); - MirroringAsyncConfiguration emptyMirroringConfiguration = - new MirroringAsyncConfiguration(empty, empty, empty); - MirroringAsyncConfiguration configuration = - new MirroringAsyncConfiguration(emptyMirroringConfiguration); - assertThat(configuration.get("hbase.client.connection.impl")) - .isEqualTo(MirroringConnection.class.getCanonicalName()); - assertThat(configuration.get("hbase.client.async.connection.impl")) - .isEqualTo(MirroringAsyncConnection.class.getCanonicalName()); - } - - @Test - public void testManualConstructionIsntBackwardsCompatible() { - Configuration empty = new Configuration(false); - MirroringAsyncConfiguration emptyMirroringConfiguration = - new MirroringAsyncConfiguration(empty, empty, empty); - MirroringAsyncConfiguration configuration = - new MirroringAsyncConfiguration(emptyMirroringConfiguration); - assertThrows( - IllegalArgumentException.class, - () -> { - new MirroringConfiguration(configuration); - }); - } - - @Test - public void testConfigurationConstructorIsBackwardsCompatible() { - Configuration testConfiguration = new Configuration(false); - testConfiguration.set(MirroringConfigurationHelper.MIRRORING_PRIMARY_CONNECTION_CLASS_KEY, "1"); - testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_SECONDARY_CONNECTION_CLASS_KEY, "2"); - testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_PRIMARY_ASYNC_CONNECTION_CLASS_KEY, "3"); - 
testConfiguration.set( - MirroringConfigurationHelper.MIRRORING_SECONDARY_ASYNC_CONNECTION_CLASS_KEY, "4"); - MirroringAsyncConfiguration mirroringAsyncConfiguration = - new MirroringAsyncConfiguration(testConfiguration); - - new MirroringConfiguration(mirroringAsyncConfiguration); + assertValidConfiguration(testConfiguration); } } diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncTable.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncTable.java index d84220ec40..b31162768d 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncTable.java +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncTable.java @@ -23,24 +23,31 @@ import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerToRejectRequests; import static com.google.common.truth.Truth.assertThat; import static junit.framework.TestCase.fail; +import static org.junit.Assert.assertThrows; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyList; import static org.mockito.ArgumentMatchers.eq; import static org.mockito.Mockito.lenient; +import static org.mockito.Mockito.mock; import static org.mockito.Mockito.never; import static org.mockito.Mockito.spy; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; +import com.google.cloud.bigtable.mirroring.hbase1_x.MirroringResultScanner; import com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers; -import 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.OperationUtils; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringSpanConstants.HBaseOperation; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import com.google.common.collect.ImmutableList; import com.google.common.primitives.Longs; import java.io.IOException; import java.util.ArrayList; @@ -50,10 +57,15 @@ import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletionException; import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.stream.Collectors; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellBuilderFactory; import org.apache.hadoop.hbase.CellBuilderType; import org.apache.hadoop.hbase.CellComparator; +import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.client.AdvancedScanResultConsumer; import org.apache.hadoop.hbase.client.Append; import org.apache.hadoop.hbase.client.AsyncTable; import org.apache.hadoop.hbase.client.Delete; @@ -63,8 +75,11 @@ import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; import 
org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Row; import org.apache.hadoop.hbase.client.RowMutations; +import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.client.ScanResultConsumer; import org.apache.hadoop.hbase.client.ScanResultConsumerBase; import org.apache.hadoop.hbase.io.TimeRange; import org.junit.Before; @@ -88,6 +103,8 @@ public class TestMirroringAsyncTable { @Mock SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; @Mock ListenableReferenceCounter referenceCounter; @Mock AsyncTable.CheckAndMutateBuilder primaryBuilder; + @Mock ExecutorService executorService; + Timestamper timestamper = new NoopTimestamper(); MirroringAsyncTable mirroringTable; @@ -103,7 +120,11 @@ public void setUp() { flowController, secondaryWriteErrorConsumer, new MirroringTracer(), - referenceCounter)); + new ReadSampler(100), + timestamper, + referenceCounter, + executorService, + 10)); lenient() .doReturn(primaryBuilder) @@ -137,6 +158,22 @@ public void testMismatchDetectorIsCalledOnGetSingle() verify(mismatchDetector, never()).get(anyList(), any(Result[].class), any(Result[].class)); } + @Test + public void testPrimaryReadExceptionDoesntCallSecondaryNorVerification() throws IOException { + Get request = createGet("test"); + IOException expectedException = new IOException("expected"); + CompletableFuture primaryFuture = new CompletableFuture<>(); + primaryFuture.completeExceptionally(expectedException); + when(primaryTable.get(request)).thenReturn(primaryFuture); + + Exception thrownException = + assertThrows(ExecutionException.class, () -> mirroringTable.get(request).get()); + assertThat(thrownException.getCause()).isEqualTo(expectedException); + + verify(secondaryTable, never()).get(any(Get.class)); + verify(mismatchDetector, never()).get(request, expectedException); + } + @Test public void 
testSecondaryReadExceptionCallsVerificationErrorHandlerOnSingleGet() throws ExecutionException, InterruptedException { @@ -164,44 +201,20 @@ public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnSingleGet() } @Test - public void testMismatchDetectorIsCalledOnExists() - throws ExecutionException, InterruptedException { - Get get = createGet("test"); - final boolean expectedResult = true; - CompletableFuture primaryFuture = new CompletableFuture<>(); - CompletableFuture secondaryFuture = new CompletableFuture<>(); - when(primaryTable.exists(get)).thenReturn(primaryFuture); - when(secondaryTable.exists(get)).thenReturn(secondaryFuture); - - CompletableFuture resultFuture = mirroringTable.exists(get); - primaryFuture.complete(expectedResult); - secondaryFuture.complete(expectedResult); - Boolean result = resultFuture.get(); - - assertThat(result).isEqualTo(expectedResult); - - verify(mismatchDetector, times(1)).exists(get, expectedResult, expectedResult); - verify(mismatchDetector, never()).exists(any(Get.class), any()); - } - - @Test - public void testMismatchDetectorIsCalledOnGetMultiple() - throws ExecutionException, InterruptedException { + public void testMismatchDetectorIsCalledOnGetMultiple() { List get = createGets("test"); Result[] expectedResultArray = {createResult("test", "value")}; CompletableFuture expectedFuture = new CompletableFuture<>(); List> expectedResultFutureList = Collections.singletonList(expectedFuture); - when(primaryTable.batch(get)).thenReturn(expectedResultFutureList); - when(secondaryTable.batch(get)).thenReturn(expectedResultFutureList); + when(primaryTable.get(get)).thenReturn(expectedResultFutureList); + when(secondaryTable.get(get)).thenReturn(expectedResultFutureList); List> resultFutures = mirroringTable.get(get); - assertThat(resultFutures.size()).isEqualTo(1); - expectedFuture.complete(expectedResultArray[0]); - Result result = resultFutures.get(0).get(); - assertThat(result).isEqualTo(expectedResultArray[0]); + 
List results = waitForAll(resultFutures); + assertThat(results).isEqualTo(Arrays.asList(expectedResultArray)); verify(mismatchDetector, times(1)) .batch(eq(get), eq(expectedResultArray), eq(expectedResultArray)); @@ -211,34 +224,27 @@ public void testMismatchDetectorIsCalledOnGetMultiple() } @Test - public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnGetMultiple() - throws ExecutionException, InterruptedException { + public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnGetMultiple() { List get = createGets("test1", "test2"); - Result[] expectedResultArray = { - createResult("test1", "value1"), createResult("test2", "value2") - }; - CompletableFuture expectedFuture1 = new CompletableFuture<>(); - CompletableFuture expectedFuture2 = new CompletableFuture<>(); + List expectedResultList = + Arrays.asList(createResult("test1", "value1"), createResult("test2", "value2")); + IOException ioe = new IOException("expected"); CompletableFuture exceptionalFuture = new CompletableFuture<>(); + exceptionalFuture.completeExceptionally(ioe); + List> expectedResultFutureList = - Arrays.asList(expectedFuture1, expectedFuture2); + expectedResultList.stream() + .map(CompletableFuture::completedFuture) + .collect(Collectors.toList()); List> exceptionalResultFutureList = Arrays.asList(exceptionalFuture, exceptionalFuture); - when(primaryTable.batch(get)).thenReturn(expectedResultFutureList); - when(secondaryTable.batch(get)).thenReturn(exceptionalResultFutureList); + when(primaryTable.get(get)).thenReturn(expectedResultFutureList); + when(secondaryTable.get(get)).thenReturn(exceptionalResultFutureList); List> resultFutures = mirroringTable.get(get); - assertThat(resultFutures.size()).isEqualTo(2); - - expectedFuture1.complete(expectedResultArray[0]); - expectedFuture2.complete(expectedResultArray[1]); - IOException ioe = new IOException("expected"); - exceptionalFuture.completeExceptionally(ioe); - Result result1 = resultFutures.get(0).get(); - 
assertThat(result1).isEqualTo(expectedResultArray[0]); - Result result2 = resultFutures.get(1).get(); - assertThat(result2).isEqualTo(expectedResultArray[1]); + List results = waitForAll(resultFutures); + assertThat(results).isEqualTo(expectedResultList); ArgumentCaptor argument = ArgumentCaptor.forClass(CompletionException.class); @@ -251,7 +257,28 @@ public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnGetMultiple } @Test - public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnExists() + public void testMismatchDetectorIsCalledOnExistsSingle() + throws ExecutionException, InterruptedException { + Get get = createGet("test"); + final boolean expectedResult = true; + CompletableFuture primaryFuture = new CompletableFuture<>(); + CompletableFuture secondaryFuture = new CompletableFuture<>(); + when(primaryTable.exists(get)).thenReturn(primaryFuture); + when(secondaryTable.exists(get)).thenReturn(secondaryFuture); + + CompletableFuture resultFuture = mirroringTable.exists(get); + primaryFuture.complete(expectedResult); + secondaryFuture.complete(expectedResult); + Boolean result = resultFuture.get(); + + assertThat(result).isEqualTo(expectedResult); + + verify(mismatchDetector, times(1)).exists(get, expectedResult, expectedResult); + verify(mismatchDetector, never()).exists(any(Get.class), any()); + } + + @Test + public void testSecondaryExceptionCallsVerificationErrorHandlerOnExists() throws ExecutionException, InterruptedException { Get get = createGet("test"); final boolean expectedResult = true; @@ -272,6 +299,67 @@ public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnExists() verify(mismatchDetector, times(1)).exists(get, expectedException); } + @Test + public void testMismatchDetectorIsCalledOnExistsMultiple() + throws ExecutionException, InterruptedException { + List get = createGets("test"); + boolean[] expectedResultArray = {false}; + CompletableFuture expectedFuture = new CompletableFuture<>(); + List> 
expectedResultFutureList = + Collections.singletonList(expectedFuture); + + when(primaryTable.exists(get)).thenReturn(expectedResultFutureList); + when(secondaryTable.exists(get)).thenReturn(expectedResultFutureList); + + List> resultFutures = mirroringTable.exists(get); + assertThat(resultFutures.size()).isEqualTo(1); + + expectedFuture.complete(expectedResultArray[0]); + Boolean result = resultFutures.get(0).get(); + assertThat(result).isEqualTo(expectedResultArray[0]); + + verify(mismatchDetector, times(1)) + .existsAll(eq(get), eq(expectedResultArray), eq(expectedResultArray)); + verify(mismatchDetector, never()).batch(anyList(), any()); + verify(mismatchDetector, never()).get(any(Get.class), any()); + verify(mismatchDetector, never()).get(any(Get.class), any(), any()); + } + + @Test + public void testSecondaryReadExceptionCallsVerificationErrorHandlerOnExistsMultiple() + throws ExecutionException, InterruptedException { + List gets = createGets("test1", "test2"); + boolean[] expectedResultArray = {true, false}; + CompletableFuture expectedFuture1 = new CompletableFuture<>(); + CompletableFuture expectedFuture2 = new CompletableFuture<>(); + CompletableFuture exceptionalFuture = new CompletableFuture<>(); + List> expectedResultFutureList = + Arrays.asList(expectedFuture1, expectedFuture2); + List> exceptionalResultFutureList = + Arrays.asList(exceptionalFuture, exceptionalFuture); + + when(primaryTable.exists(gets)).thenReturn(expectedResultFutureList); + when(secondaryTable.exists(gets)).thenReturn(exceptionalResultFutureList); + + List> resultFutures = mirroringTable.exists(gets); + assertThat(resultFutures.size()).isEqualTo(2); + + expectedFuture1.complete(expectedResultArray[0]); + expectedFuture2.complete(expectedResultArray[1]); + IOException ioe = new IOException("expected"); + exceptionalFuture.completeExceptionally(ioe); + Boolean result1 = resultFutures.get(0).get(); + assertThat(result1).isEqualTo(expectedResultArray[0]); + Boolean result2 = 
resultFutures.get(1).get(); + assertThat(result2).isEqualTo(expectedResultArray[1]); + + verify(mismatchDetector, times(1)).existsAll(eq(gets), any(Throwable.class)); + + verify(mismatchDetector, never()).batch(anyList(), any(), any()); + verify(mismatchDetector, never()).get(any(Get.class), any()); + verify(mismatchDetector, never()).get(any(Get.class), any(), any()); + } + @Test public void testPutIsMirrored() throws InterruptedException, ExecutionException { Put put = createPut("test", "f1", "q1", "v1"); @@ -293,6 +381,21 @@ public void testPutIsMirrored() throws InterruptedException, ExecutionException verify(secondaryTable, times(1)).put(put); } + @Test + public void testPutListIsMirrored() throws ExecutionException, InterruptedException { + Put put = createPut("test", "f1", "q1", "v1"); + List puts = Arrays.asList(put); + + when(primaryTable.put(puts)) + .thenReturn( + Arrays.asList( + CompletableFuture.completedFuture(null), CompletableFuture.completedFuture(null))); + CompletableFuture.allOf(mirroringTable.put(puts).toArray(new CompletableFuture[0])).get(); + + verify(primaryTable, times(1)).put(eq(puts)); + verify(secondaryTable, times(1)).put(eq(puts)); + } + @Test public void testPutWithErrorIsNotMirrored() { final Put put = createPut("test", "f1", "q1", "v1"); @@ -341,7 +444,6 @@ List waitForAll(List> futures) { try { results.add(future.get()); } catch (Exception e) { - results.add(null); } } return results; @@ -405,9 +507,9 @@ public void testBatchGetAndPutGetsAreVerifiedOnSuccess() { .consume(eq(HBaseOperation.BATCH), any(Mutation.class), any(Throwable.class)); } + @Test public void testBatchAllPrimaryFailed() throws IOException, InterruptedException, ExecutionException { - setupFlowControllerMock(flowController); Put put1 = createPut("test1", "f1", "q1", "v1"); Get get1 = createGet("get1"); @@ -426,6 +528,7 @@ public void testBatchAllPrimaryFailed() verify(referenceCounter, never()).incrementReferenceCount(); List> resultFutures = 
mirroringTable.batch(requests); + assertThat(resultFutures.size()).isEqualTo(2); verify(referenceCounter, times(1)).incrementReferenceCount(); IOException ioe = new IOException("expected"); @@ -435,7 +538,6 @@ public void testBatchAllPrimaryFailed() verify(referenceCounter, times(1)).decrementReferenceCount(); List results = waitForAll(resultFutures); - assertThat(results.size()).isEqualTo(2); verify(primaryTable, times(1)).batch(requests); verify(secondaryTable, never()).batch(anyList()); @@ -446,7 +548,7 @@ public void testBatchAllPrimaryFailed() } @Test - public void testBatchGetAndPut() { + public void testBatchGetAndPut() throws ExecutionException, InterruptedException { Put put1 = createPut("test1", "f1", "q1", "v1"); Put put2 = createPut("test2", "f2", "q2", "v2"); Put put3 = createPut("test3", "f3", "q3", "v3"); @@ -506,16 +608,15 @@ public void testBatchGetAndPut() { secondaryFutures.get(3).complete(get3Result); // get3 - ok verify(referenceCounter, times(1)).decrementReferenceCount(); - List results = waitForAll(resultFutures); - assertThat(results.size()).isEqualTo(primaryFutures.size()); + waitForAll(resultFutures); - assertThat(results.get(0)).isEqualTo(null); // put1 + assertThat(resultFutures.get(0).get()).isEqualTo(null); // put1 assertThat(resultFutures.get(1).isCompletedExceptionally()); // put2 - assertThat(results.get(2)).isEqualTo(null); // put3 + assertThat(resultFutures.get(2).get()).isEqualTo(null); // put3 - assertThat(results.get(3)).isEqualTo(get1Result); + assertThat(resultFutures.get(3).get()).isEqualTo(get1Result); assertThat(resultFutures.get(4).isCompletedExceptionally()); - assertThat(results.get(5)).isEqualTo(get3Result); + assertThat(resultFutures.get(5).get()).isEqualTo(get3Result); verify(primaryTable, times(1)).batch(requests); verify(secondaryTable, times(1)).batch(eq(secondaryRequests)); @@ -530,7 +631,8 @@ public void testBatchGetAndPut() { } @Test - public void testBatchGetsPrimaryFailsSecondaryOk() { + public void 
testBatchGetsPrimaryFailsSecondaryOk() + throws ExecutionException, InterruptedException { Get get1 = createGet("get1"); Get get2 = createGet("get2"); @@ -552,17 +654,18 @@ public void testBatchGetsPrimaryFailsSecondaryOk() { when(secondaryTable.batch(secondaryRequests)).thenReturn(secondaryFutures); List> resultFutures = mirroringTable.batch(requests); + assertThat(resultFutures.size()).isEqualTo(primaryFutures.size()); + IOException ioe = new IOException("expected"); primaryFutures.get(0).completeExceptionally(ioe); // get1 - failed primaryFutures.get(1).complete(get2Result); // get2 - ok secondaryFutures.get(0).complete(get2Result); // get2 - ok - List results = waitForAll(resultFutures); - assertThat(results.size()).isEqualTo(primaryFutures.size()); + waitForAll(resultFutures); - assertThat(resultFutures.get(0).isCompletedExceptionally()); // get1 - assertThat(results.get(1)).isEqualTo(get2Result); // put3 + assertThat(resultFutures.get(0).isCompletedExceptionally()); // get1 - failed + assertThat(resultFutures.get(1).get()).isEqualTo(get2Result); // get2 - ok verify(primaryTable, times(1)).batch(requests); verify(secondaryTable, times(1)).batch(eq(secondaryRequests)); @@ -643,6 +746,39 @@ public void testConditionalWriteWhenPrimaryErred() verify(secondaryTable, never()).put(any(Put.class)); } + @Test + public void testConditionalWriteHappensWhenSecondaryErred() + throws ExecutionException, InterruptedException, IOException { + byte[] row = "r1".getBytes(); + Put put = new Put(row); + RowMutations mutations = new RowMutations(row); + mutations.add(put); + CompletableFuture primaryFuture = new CompletableFuture<>(); + when(primaryBuilder.thenMutate(mutations)).thenReturn(primaryFuture); + + IOException ioe = new IOException("expected"); + CompletableFuture exceptionalFuture = new CompletableFuture<>(); + exceptionalFuture.completeExceptionally(ioe); + when(secondaryTable.mutateRow(mutations)).thenReturn(exceptionalFuture); + + verify(referenceCounter, 
never()).incrementReferenceCount(); + verify(referenceCounter, never()).decrementReferenceCount(); + CompletableFuture resultFuture = + mirroringTable.checkAndMutate("r1".getBytes(), "f1".getBytes()).thenMutate(mutations); + + verify(referenceCounter, times(1)).incrementReferenceCount(); + primaryFuture.complete(true); + // The reference count is incremented once at the beginning of checkAndMutate() and then for the + // second time in writeWithFlowControl(). + // It's done this way so that the reference counting invariant isn't violated when refactoring + // the brittle code around forwarding the result of writeWithFlowControl(). + resultFuture.get(); + verify(secondaryTable, times(1)).mutateRow(mutations); + + verify(referenceCounter, times(2)).incrementReferenceCount(); + verify(referenceCounter, times(2)).decrementReferenceCount(); + } + @Test public void testCheckAndMutateBuilderChainingWhenInPlace() { byte[] qual = "q1".getBytes(); @@ -739,33 +875,67 @@ public void testIncrement() throws ExecutionException, InterruptedException { }); Put expectedPut = OperationUtils.makePutFromResult(incrementResult); - // increment() and append() modify the reference counter twice to make logic less brittle when(primaryTable.increment(any(Increment.class))) .thenReturn(CompletableFuture.completedFuture(incrementResult)); + verify(referenceCounter, never()).decrementReferenceCount(); verify(referenceCounter, never()).incrementReferenceCount(); mirroringTable.increment(increment).get(); + // increment() and append() modify the reference counter twice to make logic less brittle: + // We increment and decrement reference counters around both mutationAsPut() and + // writeWithFlowControl() - it simplifies the implementation. + // It also causes no harm because - as this test shows - the reference counters are + // incremented and decremented the same number of times.
verify(referenceCounter, times(2)).decrementReferenceCount(); verify(referenceCounter, times(2)).incrementReferenceCount(); - mirroringTable - .incrementColumnValue("r1".getBytes(), "f1".getBytes(), "q1".getBytes(), 3L) - .get(); - verify(referenceCounter, times(4)).decrementReferenceCount(); - verify(referenceCounter, times(4)).incrementReferenceCount(); + + verify(primaryTable, times(1)).increment(increment); + verify(secondaryTable, never()).increment(any(Increment.class)); + ArgumentCaptor putCaptor = ArgumentCaptor.forClass(Put.class); + verify(secondaryTable, times(1)).put(putCaptor.capture()); + assertPutsAreEqual(putCaptor.getValue(), expectedPut); + } + + @Test + public void testIncrementColumnValue() throws ExecutionException, InterruptedException { + Increment increment = new Increment("r1".getBytes()); + Result incrementResult = + Result.create( + new Cell[] { + CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY) + .setRow("r1".getBytes()) + .setFamily("f1".getBytes()) + .setQualifier("q1".getBytes()) + .setTimestamp(12) + .setType(Cell.Type.Put) + .setValue(Longs.toByteArray(142)) + .build() + }); + Put expectedPut = OperationUtils.makePutFromResult(incrementResult); + + when(primaryTable.increment(any(Increment.class))) + .thenReturn(CompletableFuture.completedFuture(incrementResult)); + + verify(referenceCounter, never()).decrementReferenceCount(); + verify(referenceCounter, never()).incrementReferenceCount(); + // We're testing that it's equivalent to plain increment(). 
mirroringTable .incrementColumnValue( "r1".getBytes(), "f1".getBytes(), "q1".getBytes(), 3L, Durability.SYNC_WAL) .get(); - verify(referenceCounter, times(6)).decrementReferenceCount(); - verify(referenceCounter, times(6)).incrementReferenceCount(); + // increment() and append() modify the reference counter twice to make logic less brittle: + // We increment and decrement reference counters around both mutationAsPut() and + // writeWithFlowControl() - it simplifies the implementation. + // It also causes no harm because - as this test shows - the reference counters are + // incremented and decremented the same number of times. + verify(referenceCounter, times(2)).decrementReferenceCount(); + verify(referenceCounter, times(2)).incrementReferenceCount(); - ArgumentCaptor argument = ArgumentCaptor.forClass(Put.class); + verify(primaryTable, times(1)).increment(increment); verify(secondaryTable, never()).increment(any(Increment.class)); - verify(secondaryTable, times(3)).put(argument.capture()); - - assertPutsAreEqual(argument.getAllValues().get(0), expectedPut); - assertPutsAreEqual(argument.getAllValues().get(1), expectedPut); - assertPutsAreEqual(argument.getAllValues().get(2), expectedPut); + ArgumentCaptor putCaptor = ArgumentCaptor.forClass(Put.class); + verify(secondaryTable, times(1)).put(putCaptor.capture()); + assertPutsAreEqual(putCaptor.getValue(), expectedPut); } @Test @@ -788,10 +958,14 @@ public void testAppend() throws ExecutionException, InterruptedException { when(primaryTable.append(any(Append.class))) .thenReturn(CompletableFuture.completedFuture(appendResult)); - // increment() and append() modify the reference counter twice to make logic less brittle verify(referenceCounter, never()).decrementReferenceCount(); verify(referenceCounter, never()).incrementReferenceCount(); mirroringTable.append(append).get(); + // increment() and append() modify the reference counter twice to make logic less brittle: + // We increment and decrement reference counters 
around both mutationAsPut() and + // writeWithFlowControl() - it simplifies the implementation. + // It also causes no harm because - as this test shows - the reference counters are + // incremented and decremented the same number of times. verify(referenceCounter, times(2)).decrementReferenceCount(); verify(referenceCounter, times(2)).incrementReferenceCount(); @@ -802,6 +976,122 @@ public void testAppend() throws ExecutionException, InterruptedException { verify(secondaryTable, never()).append(any(Append.class)); } + @Test + public void testAppendWhichDoesntWantResult() throws InterruptedException, ExecutionException { + final byte[] row = "r1".getBytes(); + final byte[] family = "f1".getBytes(); + final byte[] qualifier = "q1".getBytes(); + final long ts = 12; + final byte[] value = "v1".getBytes(); + + Append appendIgnoringResult = new Append(row).setReturnResults(false); + + when(primaryTable.append(any(Append.class))) + .thenReturn( + CompletableFuture.completedFuture( + Result.create( + new Cell[] { + CellUtil.createCell( + row, family, qualifier, ts, KeyValue.Type.Put.getCode(), value) + }))); + Result appendWithoutResult = mirroringTable.append(appendIgnoringResult).get(); + + ArgumentCaptor appendCaptor = ArgumentCaptor.forClass(Append.class); + verify(primaryTable, times(1)).append(appendCaptor.capture()); + assertThat(appendCaptor.getValue().isReturnResults()).isTrue(); + assertThat(appendWithoutResult.value()).isNull(); + } + + @Test + public void testIncrementWhichDoesntWantResult() throws InterruptedException, ExecutionException { + final byte[] row = "r1".getBytes(); + final byte[] family = "f1".getBytes(); + final byte[] qualifier = "q1".getBytes(); + final long ts = 12; + final byte[] value = "v1".getBytes(); + + Increment incrementIgnoringResult = new Increment(row).setReturnResults(false); + + when(primaryTable.increment(any(Increment.class))) + .thenReturn( + CompletableFuture.completedFuture( + Result.create( + new Cell[] { + 
CellUtil.createCell( + row, family, qualifier, ts, KeyValue.Type.Put.getCode(), value) + }))); + Result incrementWithoutResult = mirroringTable.increment(incrementIgnoringResult).get(); + + ArgumentCaptor incrementCaptor = ArgumentCaptor.forClass(Increment.class); + verify(primaryTable, times(1)).increment(incrementCaptor.capture()); + assertThat(incrementCaptor.getValue().isReturnResults()).isTrue(); + assertThat(incrementWithoutResult.value()).isNull(); + } + + @Test + public void testBatchAppendWhichDoesntWantResult() + throws InterruptedException, ExecutionException { + final byte[] row = "r1".getBytes(); + final byte[] family = "f1".getBytes(); + final byte[] qualifier = "q1".getBytes(); + final long ts = 12; + final byte[] value = "v1".getBytes(); + + List batchAppendIgnoringResult = + Collections.singletonList(new Append(row).setReturnResults(false)); + + when(primaryTable.batch(anyList())) + .thenReturn( + Collections.singletonList( + CompletableFuture.completedFuture( + Result.create( + new Cell[] { + CellUtil.createCell( + row, family, qualifier, ts, KeyValue.Type.Put.getCode(), value) + })))); + + List> batchAppendWithoutResult = + mirroringTable.batch(batchAppendIgnoringResult); + + ArgumentCaptor> listCaptor = ArgumentCaptor.forClass(List.class); + verify(primaryTable, times(1)).batch(listCaptor.capture()); + assertThat(listCaptor.getValue().size()).isEqualTo(1); + assertThat(((Append) listCaptor.getValue().get(0)).isReturnResults()).isTrue(); + assertThat((batchAppendWithoutResult.get(0).get()).value()).isNull(); + } + + @Test + public void testBatchIncrementWhichDoesntWantResult() + throws InterruptedException, ExecutionException { + final byte[] row = "r1".getBytes(); + final byte[] family = "f1".getBytes(); + final byte[] qualifier = "q1".getBytes(); + final long ts = 12; + final byte[] value = "v1".getBytes(); + + List batchIncrementIgnoringResult = + Collections.singletonList(new Increment(row).setReturnResults(false)); + + 
when(primaryTable.batch(anyList())) + .thenReturn( + Collections.singletonList( + CompletableFuture.completedFuture( + Result.create( + new Cell[] { + CellUtil.createCell( + row, family, qualifier, ts, KeyValue.Type.Put.getCode(), value) + })))); + + List> batchIncrementWithoutResult = + mirroringTable.batch(batchIncrementIgnoringResult); + + ArgumentCaptor> listCaptor = ArgumentCaptor.forClass(List.class); + verify(primaryTable, times(1)).batch(listCaptor.capture()); + assertThat(listCaptor.getValue().size()).isEqualTo(1); + assertThat(((Increment) listCaptor.getValue().get(0)).isReturnResults()).isTrue(); + assertThat((batchIncrementWithoutResult.get(0).get()).value()).isNull(); + } + @Test public void testExceptionalFlowControllerAndWriteInBatch() throws ExecutionException, InterruptedException { @@ -828,4 +1118,140 @@ public void testExceptionalFlowControllerAndWriteInBatch() verify(secondaryWriteErrorConsumer, times(1)) .consume(eq(HBaseOperation.BATCH), eq(Arrays.asList(put2)), eq(flowControllerException)); } + + @Test + public void testFlowControllerExceptionInGetPreventsSecondaryOperation() + throws ExecutionException, InterruptedException { + setupFlowControllerToRejectRequests(flowController); + + Get request = createGet("test"); + Result expectedResult = createResult("test", "value"); + when(primaryTable.get(request)).thenReturn(CompletableFuture.completedFuture(expectedResult)); + Result result = mirroringTable.get(request).get(); + assertThat(result).isEqualTo(expectedResult); + + verify(primaryTable, times(1)).get(request); + verify(secondaryTable, never()).get(any(Get.class)); + } + + @Test + public void testFlowControllerExceptionInPutExecutesWriteErrorHandler() + throws ExecutionException, InterruptedException { + setupFlowControllerToRejectRequests(flowController); + + Put put = createPut("test", "f1", "q1", "v1"); + when(primaryTable.put(any(Put.class))).thenReturn(CompletableFuture.completedFuture(null)); + mirroringTable.put(put).get(); + + 
verify(primaryTable, times(1)).put(put); + verify(secondaryTable, never()).put(put); + verify(secondaryWriteErrorConsumer, times(1)) + .consume(eq(HBaseOperation.PUT), eq(ImmutableList.of(put)), any(Throwable.class)); + } + + @Test + public void testFlowControllerExceptionInBatchExecutesWriteErrorHandler() + throws ExecutionException, InterruptedException { + setupFlowControllerToRejectRequests(flowController); + + Put put1 = createPut("test0", "f1", "q1", "v1"); + Put put2 = createPut("test1", "f1", "q2", "v1"); + Get get1 = createGet("test2"); + List request = ImmutableList.of(put1, put2, get1); + + when(primaryTable.batch(request)) + .thenReturn( + Arrays.asList( + CompletableFuture.completedFuture(null), + CompletableFuture.completedFuture(null), + CompletableFuture.completedFuture(Result.create(new Cell[0])))); + CompletableFuture.allOf(mirroringTable.batch(request).toArray(new CompletableFuture[0])).get(); + + verify(primaryTable, times(1)).batch(eq(request)); + verify(secondaryTable, never()).batch(eq(request)); + verify(secondaryWriteErrorConsumer, times(1)) + .consume(eq(HBaseOperation.BATCH), eq(ImmutableList.of(put1, put2)), any(Throwable.class)); + } + + @Test + public void testBatchWithAppendsAndIncrements() { + Increment increment = new Increment("i".getBytes()); + increment.addColumn("f".getBytes(), "q".getBytes(), 1); + + Append append = new Append("a".getBytes()); + append.add("f".getBytes(), "q".getBytes(), "v".getBytes()); + + List operations = + Arrays.asList(increment, append, createPut("p", "f", "q", "v"), createGet("g")); + when(primaryTable.batch(operations)) + .thenReturn( + Arrays.asList( + CompletableFuture.completedFuture(createResult("i", "f", "q", 1, "1")), + CompletableFuture.completedFuture(createResult("a", "f", "q", 2, "2")), + CompletableFuture.completedFuture(new Result()), + CompletableFuture.completedFuture(createResult("g", "f", "q", 3, "3")))); + + List expectedSecondaryOperations = + Arrays.asList( + createPut("i", "f", "q", 
1, "1"), + createPut("a", "f", "q", 2, "2"), + createPut("p", "f", "q", "v"), + createGet("g")); + + mirroringTable.batch(operations); + + verify(primaryTable, times(1)).batch(operations); + ArgumentCaptor> argumentCaptor = ArgumentCaptor.forClass(List.class); + verify(secondaryTable, times(1)).batch(argumentCaptor.capture()); + + assertPutsAreEqual( + (Put) argumentCaptor.getValue().get(0), (Put) expectedSecondaryOperations.get(0)); + assertPutsAreEqual( + (Put) argumentCaptor.getValue().get(1), (Put) expectedSecondaryOperations.get(1)); + assertPutsAreEqual( + (Put) argumentCaptor.getValue().get(2), (Put) expectedSecondaryOperations.get(2)); + assertThat(argumentCaptor.getValue().get(3)).isEqualTo(expectedSecondaryOperations.get(3)); + } + + @Test + public void testGetScanner() { + Scan scan = new Scan(); + ResultScanner scanner = mirroringTable.getScanner(scan); + verify(primaryTable, times(1)).getScanner(scan); + verify(secondaryTable, times(1)).getScanner(scan); + assertThat(scanner).isInstanceOf(MirroringResultScanner.class); + } + + @Test + public void testScanWithScanResultConsumer() { + Scan scan = new Scan(); + ScanResultConsumer consumer = mock(ScanResultConsumer.class); + mirroringTable.scan(scan, consumer); + + verify(primaryTable, times(1)).scan(eq(scan), any(ScanResultConsumer.class)); + verify(secondaryTable, never()).scan(any(Scan.class), any(ScanResultConsumer.class)); + } + + @Test + public void testScanWithAdvancedScanResultConsumer() { + Scan scan = new Scan(); + AdvancedScanResultConsumer consumer = mock(AdvancedScanResultConsumer.class); + mirroringTable.scan(scan, consumer); + + verify(primaryTable, times(1)).scan(eq(scan), any(AdvancedScanResultConsumer.class)); + verify(secondaryTable, never()).scan(any(Scan.class), any(AdvancedScanResultConsumer.class)); + } + + @Test + public void testScanAll() { + Scan scan = new Scan(); + + CompletableFuture> scanAllFuture = new CompletableFuture<>(); + 
when(primaryTable.scanAll(any(Scan.class))).thenReturn(scanAllFuture); + + CompletableFuture> results = mirroringTable.scanAll(scan); + verify(primaryTable, times(1)).scanAll(scan); + verify(secondaryTable, never()).scanAll(any(Scan.class)); + assertThat(results).isEqualTo(scanAllFuture); + } } diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncTableInputModification.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncTableInputModification.java index e185517909..6e341fec92 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncTableInputModification.java +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestMirroringAsyncTableInputModification.java @@ -19,18 +19,21 @@ import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createGet; import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createGets; import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createPut; -import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createResult; import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerMock; import static org.mockito.ArgumentMatchers.anyList; import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.lenient; import static org.mockito.Mockito.spy; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; import 
com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; import com.google.common.util.concurrent.SettableFuture; import java.util.ArrayList; @@ -39,9 +42,10 @@ import java.util.List; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; -import org.apache.hadoop.hbase.Cell; +import java.util.stream.Collectors; import org.apache.hadoop.hbase.client.AsyncTable; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; @@ -54,11 +58,10 @@ import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; -import org.mockito.ArgumentMatchers; import org.mockito.Mock; +import org.mockito.invocation.InvocationOnMock; import org.mockito.junit.MockitoJUnit; import org.mockito.junit.MockitoRule; -import org.mockito.stubbing.Answer; @RunWith(JUnit4.class) public class TestMirroringAsyncTableInputModification { @@ -70,13 +73,17 @@ public class TestMirroringAsyncTableInputModification { @Mock FlowController flowController; @Mock SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; @Mock ListenableReferenceCounter referenceCounter; + @Mock ExecutorService executorService; + Timestamper timestamper = new NoopTimestamper(); MirroringAsyncTable mirroringTable; - SettableFuture 
secondaryOperationAllowedFuture; + CompletableFuture letPrimaryThroughFuture; + SettableFuture secondaryOperationBlockedOnFuture; @Before public void setUp() { setupFlowControllerMock(flowController); + this.letPrimaryThroughFuture = new CompletableFuture<>(); this.mirroringTable = spy( new MirroringAsyncTable<>( @@ -86,13 +93,33 @@ public void setUp() { flowController, secondaryWriteErrorConsumer, new MirroringTracer(), - referenceCounter)); - - secondaryOperationAllowedFuture = SettableFuture.create(); - blockMethodCall(secondaryTable, secondaryOperationAllowedFuture).batch(anyList()); + new ReadSampler(100), + timestamper, + referenceCounter, + executorService, + 10)); + + secondaryOperationBlockedOnFuture = SettableFuture.create(); + + lenient().doAnswer(this::answerWithSuccessfulNulls).when(primaryTable).exists(anyList()); + lenient().doAnswer(this::answerWithSuccessfulNulls).when(primaryTable).get(anyList()); + lenient().doAnswer(this::answerWithSuccessfulNulls).when(primaryTable).put(anyList()); + lenient().doAnswer(this::answerWithSuccessfulNulls).when(primaryTable).delete(anyList()); + lenient().doAnswer(this::answerWithSuccessfulNulls).when(primaryTable).batch(anyList()); + + lenient().doAnswer(this::answerWithSuccessfulNulls).when(secondaryTable).exists(anyList()); + lenient().doAnswer(this::answerWithSuccessfulNulls).when(secondaryTable).get(anyList()); + lenient().doAnswer(this::answerWithSuccessfulNulls).when(secondaryTable).put(anyList()); + lenient().doAnswer(this::answerWithSuccessfulNulls).when(secondaryTable).delete(anyList()); + lenient().doAnswer(this::answerWithSuccessfulNulls).when(secondaryTable).batch(anyList()); + } - mockBatch(this.primaryTable); - mockBatch(this.secondaryTable); + private List> answerWithSuccessfulNulls( + InvocationOnMock invocationOnMock) { + List operations = (List) invocationOnMock.getArguments()[0]; + return operations.stream() + .map(ignored -> (CompletableFuture) letPrimaryThroughFuture) + 
.collect(Collectors.toList()); } @Test @@ -100,39 +127,75 @@ public void testExists() throws InterruptedException, ExecutionException, Timeou List gets = createGets("k1", "k2", "k3"); List inputList = new ArrayList<>(gets); + CompletableFuture letExistsThroughFuture = new CompletableFuture<>(); + List> expectedResult = + Collections.nCopies(gets.size(), letExistsThroughFuture); + + blockMethodCall(secondaryTable, secondaryOperationBlockedOnFuture).exists(anyList()); + doAnswer(ignored -> expectedResult).when(primaryTable).exists(anyList()); + doAnswer(ignored -> expectedResult).when(secondaryTable).exists(anyList()); + List> results = this.mirroringTable.exists(inputList); - verifyWithInputModification(secondaryOperationAllowedFuture, gets, inputList, results); + inputList.clear(); + letExistsThroughFuture.complete(true); + + results.get(0).get(3, TimeUnit.SECONDS); + verify(this.primaryTable, times(1)).exists(gets); + + secondaryOperationBlockedOnFuture.set(null); + verify(this.secondaryTable, times(1)).exists(gets); } @Test - public void testGet() throws InterruptedException, TimeoutException, ExecutionException { + public void testGets() throws InterruptedException, TimeoutException, ExecutionException { List gets = createGets("k1", "k2", "k3"); List inputList = new ArrayList<>(gets); List> results = this.mirroringTable.get(inputList); - verifyWithInputModification(secondaryOperationAllowedFuture, gets, inputList, results); + inputList.clear(); + letPrimaryThroughFuture.complete(null); + + results.get(0).get(3, TimeUnit.SECONDS); + verify(this.primaryTable, times(1)).get(gets); + + secondaryOperationBlockedOnFuture.set(null); + verify(this.secondaryTable, times(1)).get(gets); } @Test - public void testPut() throws InterruptedException, TimeoutException, ExecutionException { + public void testPuts() throws InterruptedException, TimeoutException, ExecutionException { List puts = Collections.singletonList(createPut("r", "f", "q", "v")); List inputList = new 
ArrayList<>(puts); List> results = this.mirroringTable.put(inputList); - verifyWithInputModification(secondaryOperationAllowedFuture, puts, inputList, results); + inputList.clear(); + letPrimaryThroughFuture.complete(null); + + results.get(0).get(3, TimeUnit.SECONDS); + verify(this.primaryTable, times(1)).put(puts); + + secondaryOperationBlockedOnFuture.set(null); + verify(this.secondaryTable, times(1)).put(puts); } @Test - public void testDelete() throws InterruptedException, TimeoutException, ExecutionException { + public void testDeletes() throws InterruptedException, TimeoutException, ExecutionException { List puts = Collections.singletonList(new Delete("r".getBytes())); List inputList = new ArrayList<>(puts); List> results = this.mirroringTable.delete(inputList); - verifyWithInputModification(secondaryOperationAllowedFuture, puts, inputList, results); + inputList.clear(); + letPrimaryThroughFuture.complete(null); + + results.get(0).get(3, TimeUnit.SECONDS); + verify(this.primaryTable, times(1)).delete(puts); + + secondaryOperationBlockedOnFuture.set(null); + verify(this.secondaryTable, times(1)).delete(puts); } @Test @@ -142,47 +205,13 @@ public void testBatch() throws InterruptedException, TimeoutException, Execution List> results = this.mirroringTable.batch(inputList); - verifyWithInputModification(secondaryOperationAllowedFuture, ops, inputList, results); - } - - private void verifyWithInputModification( - SettableFuture secondaryOperationAllowedFuture, - List ops, - List inputList, - List> results) - throws InterruptedException, ExecutionException, TimeoutException { + inputList.clear(); + letPrimaryThroughFuture.complete(null); results.get(0).get(3, TimeUnit.SECONDS); verify(this.primaryTable, times(1)).batch(ops); - inputList.clear(); // User modifies the list - secondaryOperationAllowedFuture.set(null); + secondaryOperationBlockedOnFuture.set(null); verify(this.secondaryTable, times(1)).batch(ops); } - - void mockBatch(AsyncTable table) { - doAnswer( - 
(Answer>>) - invocationOnMock -> { - Object[] args = invocationOnMock.getArguments(); - List operations = (List) args[0]; - List> results = new ArrayList<>(); - - for (Row operation : operations) { - if (operation instanceof Get) { - Get get = (Get) operation; - CompletableFuture result = new CompletableFuture<>(); - result.complete(createResult(get.getRow(), get.getRow())); - results.add(result); - } else { - CompletableFuture result = new CompletableFuture<>(); - result.complete(Result.create(new Cell[0])); - results.add(result); - } - } - return results; - }) - .when(table) - .batch(ArgumentMatchers.anyList()); - } } diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestVerificationSampling.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestVerificationSampling.java new file mode 100644 index 0000000000..2604a22bf8 --- /dev/null +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/TestVerificationSampling.java @@ -0,0 +1,270 @@ +/* + * Copyright 2021 Google LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.google.cloud.bigtable.mirroring.hbase2_x; + +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createGet; +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.createPut; +import static com.google.cloud.bigtable.mirroring.hbase1_x.TestHelpers.setupFlowControllerMock; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; + +import com.google.cloud.bigtable.mirroring.hbase1_x.ExecutorServiceRule; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.ReadSampler; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.SecondaryWriteErrorConsumerWithMetrics; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.mirroringmetrics.MirroringTracer; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.referencecounting.ListenableReferenceCounter; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.NoopTimestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.timestamper.Timestamper; +import com.google.cloud.bigtable.mirroring.hbase1_x.verification.MismatchDetector; +import com.google.common.collect.ImmutableList; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import org.apache.hadoop.hbase.client.AsyncTable; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.ResultScanner; +import 
org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.client.ScanResultConsumerBase; +import org.junit.Before; +import org.junit.Rule; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import org.mockito.Mock; +import org.mockito.junit.MockitoJUnit; +import org.mockito.junit.MockitoRule; + +@RunWith(JUnit4.class) +public class TestVerificationSampling { + @Rule public final MockitoRule mockitoRule = MockitoJUnit.rule(); + + @Rule + public final ExecutorServiceRule executorServiceRule = + ExecutorServiceRule.singleThreadedExecutor(); + + @Mock AsyncTable primaryTable; + @Mock AsyncTable secondaryTable; + @Mock MismatchDetector mismatchDetector; + @Mock FlowController flowController; + @Mock SecondaryWriteErrorConsumerWithMetrics secondaryWriteErrorConsumer; + @Mock ReadSampler readSampler; + @Mock ListenableReferenceCounter referenceCounter; + Timestamper timestamper = new NoopTimestamper(); + + MirroringAsyncTable mirroringTable; + + Get get = createGet("test"); + List gets = ImmutableList.of(get); + + @Before + public void setUp() { + setupFlowControllerMock(flowController); + this.mirroringTable = + spy( + new MirroringAsyncTable<>( + primaryTable, + secondaryTable, + mismatchDetector, + flowController, + secondaryWriteErrorConsumer, + new MirroringTracer(), + readSampler, + timestamper, + referenceCounter, + executorServiceRule.executorService, + 10)); + } + + public T mockWithCompleteFuture(T table, R result) { + return doReturn(CompletableFuture.completedFuture(result)).when(table); + } + + public T mockWithCompleteFutureList(T table, int len, R result) { + List> results = new ArrayList<>(); + for (int i = 0; i < len; i++) { + results.add(CompletableFuture.completedFuture(result)); + } + return doReturn(results).when(table); + } + + public CompletableFuture allOfList(List> list) { + return CompletableFuture.allOf(list.toArray(new CompletableFuture[0])); + } + 
+ @Test + public void isGetSampled() throws ExecutionException, InterruptedException { + mockWithCompleteFuture(primaryTable, new Result()).get(get); + mockWithCompleteFuture(secondaryTable, new Result()).get(get); + + withSamplingEnabled(false); + mirroringTable.get(get).get(); + verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(1)).get(get); + verify(secondaryTable, never()).get(get); + + withSamplingEnabled(true); + mirroringTable.get(get).get(); + executorServiceRule.waitForExecutor(); + verify(readSampler, times(2)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(2)).get(get); + verify(secondaryTable, times(1)).get(get); + } + + @Test + public void isGetListSampled() throws ExecutionException, InterruptedException, TimeoutException { + mockWithCompleteFutureList(primaryTable, gets.size(), new Result()).get(gets); + mockWithCompleteFutureList(secondaryTable, gets.size(), new Result()).get(gets); + + withSamplingEnabled(false); + allOfList(mirroringTable.get(gets)).get(1, TimeUnit.SECONDS); + verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(1)).get(gets); + verify(secondaryTable, never()).get(gets); + + withSamplingEnabled(true); + allOfList(mirroringTable.get(gets)).get(1, TimeUnit.SECONDS); + executorServiceRule.waitForExecutor(); + verify(readSampler, times(2)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(2)).get(gets); + verify(secondaryTable, times(1)).get(gets); + } + + @Test + public void isExistsSampled() throws ExecutionException, InterruptedException, TimeoutException { + mockWithCompleteFuture(primaryTable, Boolean.TRUE).exists(get); + mockWithCompleteFuture(secondaryTable, Boolean.TRUE).exists(get); + + withSamplingEnabled(false); + mirroringTable.exists(get).get(1, TimeUnit.SECONDS); + verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(1)).exists(get); + verify(secondaryTable, 
never()).exists(get); + + withSamplingEnabled(true); + mirroringTable.exists(get).get(1, TimeUnit.SECONDS); + executorServiceRule.waitForExecutor(); + verify(readSampler, times(2)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(2)).exists(get); + verify(secondaryTable, times(1)).exists(get); + } + + @Test + public void isExistsAllSampled() + throws ExecutionException, InterruptedException, TimeoutException { + mockWithCompleteFutureList(primaryTable, gets.size(), Boolean.TRUE).exists(gets); + mockWithCompleteFutureList(secondaryTable, gets.size(), Boolean.TRUE).exists(gets); + + withSamplingEnabled(false); + allOfList(mirroringTable.exists(gets)).get(1, TimeUnit.SECONDS); + verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(1)).exists(gets); + verify(secondaryTable, never()).exists(gets); + + withSamplingEnabled(true); + allOfList(mirroringTable.exists(gets)).get(1, TimeUnit.SECONDS); + executorServiceRule.waitForExecutor(); + verify(readSampler, times(2)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(2)).exists(gets); + verify(secondaryTable, times(1)).exists(gets); + } + + @Test + public void isBatchSampledWithSamplingEnabled() + throws InterruptedException, ExecutionException, TimeoutException { + Put put = createPut("test", "test", "test", "test"); + List ops = ImmutableList.of(get, put); + mockWithCompleteFutureList(primaryTable, 2, new Result()).batch(ops); + mockWithCompleteFutureList(secondaryTable, 2, new Result()).batch(ops); + + withSamplingEnabled(true); + allOfList(mirroringTable.batch(ops)).get(1, TimeUnit.SECONDS); + executorServiceRule.waitForExecutor(); + verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(1)).batch(ops); + verify(secondaryTable, times(1)).batch(ops); + } + + @Test + public void isBatchSampledWithSamplingDisabled() + throws InterruptedException, ExecutionException, TimeoutException { + Put put = 
createPut("test", "test", "test", "test"); + List ops = ImmutableList.of(get, put); + mockWithCompleteFutureList(primaryTable, 2, new Result()).batch(ops); + mockWithCompleteFutureList(secondaryTable, 2, new Result()).batch(ops); + + withSamplingEnabled(false); + allOfList(mirroringTable.batch(ops)).get(1, TimeUnit.SECONDS); + executorServiceRule.waitForExecutor(); + verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); + verify(primaryTable, times(1)).batch(ops); + verify(secondaryTable, times(1)).batch(ImmutableList.of(put)); + } + + @Test + public void isResultScannerSampled() { + mirroringTable.getScanner(new Scan()); + verify(readSampler, times(1)).shouldNextReadOperationBeSampled(); + } + + @Test + public void testResultScannerWithSampling() throws IOException { + ResultScanner primaryScanner = mock(ResultScanner.class); + ResultScanner secondaryScanner = mock(ResultScanner.class); + doReturn(primaryScanner).when(primaryTable).getScanner(any(Scan.class)); + doReturn(secondaryScanner).when(secondaryTable).getScanner(any(Scan.class)); + + withSamplingEnabled(true); + + ResultScanner s = mirroringTable.getScanner(new Scan()); + s.next(); + executorServiceRule.waitForExecutor(); + verify(primaryScanner, times(1)).next(); + verify(secondaryScanner, times(1)).next(); + } + + @Test + public void testResultScannerWithoutSampling() throws IOException { + ResultScanner primaryScanner = mock(ResultScanner.class); + ResultScanner secondaryScanner = mock(ResultScanner.class); + doReturn(primaryScanner).when(primaryTable).getScanner(any(Scan.class)); + doReturn(secondaryScanner).when(secondaryTable).getScanner(any(Scan.class)); + + withSamplingEnabled(false); + + ResultScanner s = mirroringTable.getScanner(new Scan()); + s.next(); + executorServiceRule.waitForExecutor(); + verify(primaryScanner, times(1)).next(); + verify(secondaryScanner, never()).next(); + } + + private void withSamplingEnabled(boolean b) { + 
doReturn(b).when(readSampler).shouldNextReadOperationBeSampled(); + } +} diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/TestAsyncRequestScheduling.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/TestAsyncRequestScheduling.java index be2673c9b7..9ff685452a 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/TestAsyncRequestScheduling.java +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/com/google/cloud/bigtable/mirroring/hbase2_x/utils/TestAsyncRequestScheduling.java @@ -17,34 +17,38 @@ import static com.google.cloud.bigtable.mirroring.hbase2_x.utils.AsyncRequestScheduling.reserveFlowControlResourcesThenScheduleSecondary; import static com.google.common.truth.Truth.assertThat; +import static org.junit.Assert.assertThrows; import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.nullable; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.never; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; -import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController; +import com.google.cloud.bigtable.mirroring.hbase1_x.utils.flowcontrol.FlowController.ResourceReservation; import com.google.common.util.concurrent.FutureCallback; import java.io.IOException; -import java.util.ArrayList; -import java.util.List; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ExecutionException; import java.util.function.Consumer; import java.util.function.Function; import java.util.function.Supplier; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.client.Result; import 
org.junit.Test; public class TestAsyncRequestScheduling { @Test public void testExceptionalPrimaryFuture() throws ExecutionException, InterruptedException { - CompletableFuture exceptionalFuture = new CompletableFuture<>(); + // We want to test that when primary database fails and returns an exceptional future, + // no request is sent to secondary database and FlowController is appropriately dealt with. + + CompletableFuture exceptionalPrimaryFuture = new CompletableFuture<>(); IOException ioe = new IOException("expected"); - exceptionalFuture.completeExceptionally(ioe); + exceptionalPrimaryFuture.completeExceptionally(ioe); - FlowController.ResourceReservation resourceReservation = - mock(FlowController.ResourceReservation.class); - CompletableFuture resourceReservationFuture = + ResourceReservation resourceReservation = mock(ResourceReservation.class); + CompletableFuture resourceReservationFuture = CompletableFuture.completedFuture(resourceReservation); Supplier> secondaryFutureSupplier = mock(Supplier.class); @@ -53,59 +57,76 @@ public void testExceptionalPrimaryFuture() throws ExecutionException, Interrupte AsyncRequestScheduling.OperationStages> result = reserveFlowControlResourcesThenScheduleSecondary( - exceptionalFuture, + exceptionalPrimaryFuture, resourceReservationFuture, secondaryFutureSupplier, verificationCreator, flowControlReservationErrorHandler); + // reserveFlowControlResourcesThenScheduleSecondary() returns a pair of futures: + // - verificationCompleted: completed with null when secondary request + // and its verification are finished. It's never completed exceptionally, + // - userNotified: completed with primary request value or exception + // after receiving a ResourceReservation from FlowController. + + // We make sure that verificationCompleted is completed as expected. 
result.getVerificationCompletedFuture().get(); - final List resultThrowableList = new ArrayList<>(); - result - .userNotified - .exceptionally( - t -> { - resultThrowableList.add(t); - return null; - }) - .get(); - assertThat(resultThrowableList.size()).isEqualTo(1); - assertThat(resultThrowableList.get(0)).isEqualTo(ioe); + // We make sure that userNotified passes the primary result through. + // Note that CompletableFuture#get() wraps the exception in an ExecutionException. + Exception primaryException = assertThrows(ExecutionException.class, result.userNotified::get); + assertThat(primaryException.getCause()).isEqualTo(ioe); + + // resourceReservationFuture is a normally completed future so + // flowControlReservationErrorHandler was never called. + verify(flowControlReservationErrorHandler, never()).accept(nullable(Throwable.class)); + // The obtained resources must be released. verify(resourceReservation, times(1)).release(); - verify(verificationCreator, never()).apply((Void) any()); - verify(secondaryFutureSupplier, never()).get(); - verify(flowControlReservationErrorHandler, never()).accept(any()); - assertThat(resourceReservationFuture.isCancelled()); + // Primary request failed, so there was no secondary request nor verification. + verify(secondaryFutureSupplier, never()).get(); + verify(verificationCreator, never()).apply(nullable(Void.class)); } @Test public void testExceptionalReservationFuture() throws ExecutionException, InterruptedException { - CompletableFuture primaryFuture = CompletableFuture.completedFuture(null); - CompletableFuture exceptionalFuture = - new CompletableFuture<>(); + // reserveFlowControlResourcesThenScheduleSecondary() returns a pair of futures: + // - verificationCompleted: completed with null when secondary request + // and its verification are finished. It's never completed exceptionally, + // - userNotified: completed with primary request value or exception + // after receiving a ResourceReservation from FlowController. 
+ + // We want to test that when FlowController fails and returns an exceptional future, + // both of the result futures are completed as expected. + + Result primaryResult = Result.create(new Cell[0]); + CompletableFuture primaryFuture = CompletableFuture.completedFuture(primaryResult); + CompletableFuture exceptionalReservationFuture = new CompletableFuture<>(); IOException ioe = new IOException("expected"); - exceptionalFuture.completeExceptionally(ioe); + exceptionalReservationFuture.completeExceptionally(ioe); - Supplier> secondaryFutureSupplier = mock(Supplier.class); - Function> verificationCreator = mock(Function.class); + Supplier> secondaryFutureSupplier = mock(Supplier.class); + Function> verificationCreator = mock(Function.class); Consumer flowControlReservationErrorHandler = mock(Consumer.class); - AsyncRequestScheduling.OperationStages> result = + AsyncRequestScheduling.OperationStages> result = reserveFlowControlResourcesThenScheduleSecondary( primaryFuture, - exceptionalFuture, + exceptionalReservationFuture, secondaryFutureSupplier, verificationCreator, flowControlReservationErrorHandler); - final List resultThrowableList = new ArrayList<>(); - result.userNotified.get(); + // We make sure that userNotified passes the primary result through. + assertThat(result.userNotified.get()).isEqualTo(primaryResult); + // We make sure that verificationCompleted is completed normally. result.getVerificationCompletedFuture().get(); - verify(verificationCreator, never()).apply((Void) any()); + // FlowController failed so the appropriate error handler was called. + verify(flowControlReservationErrorHandler, times(1)).accept(any(Throwable.class)); + + // FlowController failed, so there was no secondary request nor verification. 
+ verify(verificationCreator, never()).apply(nullable(Result.class)); verify(secondaryFutureSupplier, never()).get(); - verify(flowControlReservationErrorHandler, times(1)).accept(any()); } } diff --git a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/org/apache/hadoop/hbase/client/TestRegistry.java b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/org/apache/hadoop/hbase/client/TestRegistry.java index 331c1174f7..cfff359fa3 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/org/apache/hadoop/hbase/client/TestRegistry.java +++ b/bigtable-hbase-mirroring-client-2.x-parent/bigtable-hbase-mirroring-client-2.x/src/test/java/org/apache/hadoop/hbase/client/TestRegistry.java @@ -15,7 +15,12 @@ */ package org.apache.hadoop.hbase.client; /** - * AsyncRegistry is private in org.apache.hadoop.hbase.client so this mock must be in the same + * It's necessary for unit testing of {@link + * com.google.cloud.bigtable.mirroring.hbase2_x.MirroringAsyncConnection} as {@link + * org.apache.hadoop.hbase.client.ConnectionFactory#createAsyncConnection()} checks cluster id + * during {@link org.apache.hadoop.hbase.client.AsyncConnection} creation. + * + *

AsyncRegistry is private in org.apache.hadoop.hbase.client so this mock must be in the same * package. */ import java.util.concurrent.CompletableFuture; diff --git a/bigtable-hbase-mirroring-client-2.x-parent/pom.xml b/bigtable-hbase-mirroring-client-2.x-parent/pom.xml index a5243f8d33..dc7f2d2e3e 100644 --- a/bigtable-hbase-mirroring-client-2.x-parent/pom.xml +++ b/bigtable-hbase-mirroring-client-2.x-parent/pom.xml @@ -16,6 +16,18 @@ limitations under the License. --> 4.0.0 + + + + org.apache.maven.plugins + maven-compiler-plugin + + ${compileSource.1.8} + ${compileSource.1.8} + + + + com.google.cloud.bigtable @@ -32,5 +44,7 @@ limitations under the License. bigtable-hbase-mirroring-client-2.x + bigtable-hbase-mirroring-client-1.x-2.x-integration-tests + bigtable-hbase-mirroring-client-2.x-integration-tests diff --git a/quickstart.md b/quickstart.md new file mode 100644 index 0000000000..5ca6cab596 --- /dev/null +++ b/quickstart.md @@ -0,0 +1,179 @@ +# Mirroring client +## High-level overview +The aim of this project is to provide a drop-in replacement for the HBase client that mirrors operations performed on the HBase cluster to the Bigtable cluster to facilitate migration from HBase to Bigtable. + +The client connects two databases, called a primary and a secondary. +By default operations are performed on the primary database and successful ones are replayed on the secondary asynchronously (this behaviour is configurable). +The client does a best-effort attempt to keep the databases in sync, however, it does not ensure consistency. +When a write to the secondary database fails it is (depending on the mode, described below) written to a log on disk so the user can replay it manually later, or thrown as an exception to the user. +The consistency of both databases is verified when reads are performed. A fraction of reads is replayed on the secondary database and their content is compared - mismatches are reported as a log message. 
+Handling of write errors and read mismatches can be overridden by the user. + +HBaseAdmin is not supported. + +## Example configuration +The Mirroring Client reads its configuration from default hbase configuration store (by default, `hbase-size.xml` file). In the simplest case the user just have to merge their HBase and bigtable-hbase-java configurations into a single file and correctly set `hbase.client.connection.impl`, `google.bigtable.mirroring.primary-client.connection.impl` and `google.bigtable.mirroring.secondary-client.connection.impl` keys as shown in the example below. + +This configuration mirrors HBase 1.x (primary database) to a Bigtable instance. +```xml + + + hbase.client.connection.impl + com.google.cloud.bigtable.mirroring.hbase1_x.MirroringConnection + + + + + google.bigtable.mirroring.primary-client.connection.impl + default + + + + hbase.zookeeper.quorum + zookeeper-url + + + + hbase.zookeeper.property.clientPort + 2181 + + + + + google.bigtable.mirroring.secondary-client.connection.impl + com.google.cloud.bigtable.hbase1_x.BigtableConnection + + + + google.bigtable.project.id + project-id + + + + google.bigtable.instance.id + instance-id + + +``` + +For mirroring HBase 2.x to Bigtable the following keys should be used. 
+```xml
+<configuration>
+  <property>
+    <name>hbase.client.connection.impl</name>
+    <value>com.google.cloud.bigtable.mirroring.hbase2_x.MirroringConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.primary-client.connection.impl</name>
+    <value>com.google.cloud.bigtable.hbase2_x.BigtableConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.secondary-client.connection.impl</name>
+    <value>default</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.primary-client.async.connection.impl</name>
+    <value>com.google.cloud.bigtable.hbase2_x.BigtableAsyncConnection</value>
+  </property>
+
+  <property>
+    <name>google.bigtable.mirroring.secondary-client.async.connection.impl</name>
+    <value>default</value>
+  </property>
+</configuration>
+```
+
+### Prefixes
+Both connections are constructed with all keys from the configuration file, which means that the bigtable-hbase-java client will have access to HBase keys; this shouldn't be a problem in the common case. If it does cause difficulties, prefixes can be used to limit the set of configuration keys available to each connection - setting the `google.bigtable.mirroring.primary-client.prefix` key to `foo` will cause the primary connection to receive only the keys starting with the `foo.` prefix, with the prefix stripped (e.g. `foo.hbase.zookeeper.quorum` would be passed to the primary connection as `hbase.zookeeper.quorum`). The user should set either none or both of the prefix keys. Keys that do not begin with one of the prefixes won't be passed to either connection.
+
+## Write modes
+Three write modes are available, with different guarantees and trade-offs.
+| Mode | Description | Source of truth | Operation latency | Secondary mutation failures | Flow control |
+|:---:|:---:|:---:|:---:|:---:|:---:|
+| Asynchronous sequential (default) | Operations on the secondary DB are started only after their counterparts on the primary DB finish, and they complete in the background | A mutation is applied to the secondary DB only if it was applied to the primary DB | The same as the primary DB's (unless flow control kicks in) | Operations which succeeded on the primary DB but failed on the secondary are written to the failed mutation log | If the mutation backlog on the secondary DB grows beyond a configurable limit, all requests are delayed |
+| Synchronous sequential | Operations on the secondary DB are started only after their counterparts on the primary DB finish, and the user's operation completes only when both finish | Same as asynchronous sequential | The sum of the primary and secondary databases' latencies | Operations which succeeded on the primary DB but failed on the secondary cause the combined operation to fail | No additional flow control because no operations are performed in the background |
+| Synchronous concurrent | Operations on both databases are started simultaneously (when possible) and the user's operation completes only when both finish | An operation may be applied to only the primary DB, only the secondary DB, both, or neither | The slower of the two databases | If either of the mutations to the primary or secondary DB fails, the user's request fails | No additional flow control because no operations are performed in the background |
+
+Exceptions thrown in synchronous mode are annotated with MirroringOperationException containing detailed information about failed operations. For more details please consult the docs of this class.
+
+Set `google.bigtable.mirroring.synchronous-writes` to `true` to enable synchronous writes (defaults to false).
+Set `google.bigtable.mirroring.concurrent-writes` to `true` to enable concurrent writes (defaults to false).
+Asynchronous concurrent mode is not supported.
+
+## Read verification
+A fraction of the reads performed on the primary database is replayed on the secondary database to verify consistency. The fraction of verified reads can be controlled with `google.bigtable.mirroring.read-verification-rate-percent`. Sampling is per operation: for each read operation - a single `Get`, a `Get` with a list, a `batch()` that might contain reads, or a `getScanner(Scan)` call - either all `Result`s returned by that operation (or scanner) are verified against the secondary database, or none of them are.
+
+Users can provide custom detection and handling of errors by extending the `MismatchDetector` class and setting the `google.bigtable.mirroring.mismatch-detector.factory-impl` key to the name of a class on the classpath.
+
+## Flow control
+To limit the number of operations that have not yet been mirrored on the secondary database, we've introduced a flow control mechanism that throttles user code performing operations on the primary database if the secondary database is slower and not keeping up.
+The same feature is used to limit the amount of memory used by pending secondary operations.
+Operations are throttled until some operations on the secondary database complete (successfully or not) and free up enough resources.
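The throttling idea can be approximated with a counting semaphore that bounds in-flight secondary operations. This is a hedged sketch, not the client's actual implementation (which uses a pluggable flow controller strategy); all names below are hypothetical:

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: bound the number of operations that were performed on
// the primary database but are not yet mirrored on the secondary.
final class FlowControlSketch {
  private final Semaphore outstanding;

  FlowControlSketch(int maxOutstandingRequests) {
    this.outstanding = new Semaphore(maxOutstandingRequests);
  }

  // Called before scheduling a secondary operation; blocks the caller
  // (throttling user code) when the backlog has reached the limit.
  void reserve() {
    outstanding.acquireUninterruptibly();
  }

  // Called when a secondary operation completes, successfully or not,
  // freeing resources for further primary operations.
  void release() {
    outstanding.release();
  }

  int availableSlots() {
    return outstanding.availablePermits();
  }
}
```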
+Synchronous operations are throttled by blocking user code in blocking operations until the operation is started on the secondary.
+Asynchronous operations are throttled by delaying the completion of futures until the same moment.
+
+## Faillog
+Failed writes to the secondary database in asynchronous mode are reported to a Faillog, which by default dumps failed mutations as JSON to an on-disk log. Users can replay that log on their own.
+The location of the log files can be controlled with the `google.bigtable.mirroring.write-error-log.appender.prefix-path` key. `google.bigtable.mirroring.write-error-log.appender.max-buffer-size` and `google.bigtable.mirroring.write-error-log.appender.drop-on-overflow` can be used to alter the default appender behavior.
+
+The user can implement custom handling of failed mutations by setting the `google.bigtable.mirroring.write-error-consumer.factory-impl` and `google.bigtable.mirroring.write-error-log.appender.factory-impl` keys to user-defined classes on the classpath.
+
+## OpenCensus
+The Mirroring Client exposes OpenCensus metrics and traces. Metrics are prefixed with `cloud.google.com/java/mirroring`.
+
+## Buffered mutator
+The Mirroring Client's Buffered Mutator works in two modes: sequential and concurrent.
+In the sequential mode, the Mirroring Buffered Mutator passes mutations to the underlying Primary Buffered Mutator and stores them in an internal buffer. When the size of the buffer exceeds `google.bigtable.mirroring.buffered-mutator.bytes-to-flush`, the Primary Buffered Mutator is flushed. After the flush, mutations that did not fail are passed to the Secondary Buffered Mutator, which is flushed immediately afterwards. The flushes happen asynchronously and do not block user code.
+A `flush()` operation issued by the user starts a Primary Buffered Mutator flush and blocks until it is finished; the Secondary Buffered Mutator is flushed asynchronously.
+
+In the concurrent mode, writes are passed to both mutators at once.
As in the sequential mode, the mutations are stored in the internal buffer and a flush is performed periodically. Write errors encountered by the flush are reported back to the user (as exceptions) the first time the user interacts with the BufferedMutator after the errors were detected. Reported exceptions are annotated with MirroringOperationException.
+A `flush()` operation issued by the user flushes both underlying Buffered Mutators concurrently and blocks until they are finished.
+
+Set `google.bigtable.mirroring.concurrent-writes` to `true` to enable the concurrent Buffered Mutator mode (defaults to false).
+
+## Client-side timestamping
+For mutations, HBase and Bigtable assign the row version (timestamp) based on server-side time (unless a version was explicitly assigned by the client). The Mirroring Client issues writes to the underlying databases a few milliseconds apart, so performing mutations without a client-assigned version will cause inconsistencies between the databases. To mitigate some of those issues, client-side timestamping is available in the Mirroring Client. When it is enabled, the Mirroring Client automatically adds a timestamp based on the client machine's time to every `Put` object passed to it. Client-side timestamps assigned by `Table`s and `BufferedMutator`s created by one `Connection` are always increasing, even if the system clock is moved backwards, for example by NTP or manually by the user.
+Be aware that client-side timestamping modifies only `Put`s - `Delete`s, `Increment`s and `Append`s are not affected by this setting and will cause inconsistencies between the databases.
+Client-side timestamping, if enabled, can use one of two modes - `inplace` and `copy`. The `inplace` mode modifies the provided `Put`s in place, which is efficient but is not correct if the user reuses `Put` objects between calls.
When `Put`s are reused, the `copy` mode should be used - it creates a copy of each `Put` before assigning the timestamp, so the provided object can be safely reused in subsequent calls (note that mutations passed to the Mirroring Client are also used asynchronously, so this safety guarantee is only provided if the synchronous mode is enabled).
+Client-side timestamping is enabled by default, in the `inplace` mode.
+Use the `google.bigtable.mirroring.enable-default-client-side-timestamps` property to disable it or change the mode.
+
+Please read the warning in the `Caveats - Timestamps` section to decide which mode fits your use case best.
+
+## Configuration options
+- `google.bigtable.mirroring.primary-client.connection.impl` - the name of the Connection class that should be used to connect to the primary database. It is used as `hbase.client.connection.impl` when creating the connection to the primary database. Set to `default` to use the default HBase connection class. Required.
+- `google.bigtable.mirroring.secondary-client.connection.impl` - the name of the Connection class that should be used to connect to the secondary database. It is used as `hbase.client.connection.impl` when creating the connection to the secondary database. Set to `default` to use the default HBase connection class. Required.
+- `google.bigtable.mirroring.primary-client.async.connection.impl` - the name of the Connection class that should be used to connect asynchronously to the primary database. It is used as `hbase.client.async.connection.impl` when creating the connection to the primary database. Set to `default` to use the default HBase connection class. Required when using HBase 2.x.
+- `google.bigtable.mirroring.secondary-client.async.connection.impl` - the name of the Connection class that should be used to connect asynchronously to the secondary database. It is used as `hbase.client.async.connection.impl` when creating the connection to the secondary database. Set to `default` to use the default HBase connection class. Required when using HBase 2.x.
+- `google.bigtable.mirroring.primary-client.prefix` - By default, all parameters from the Configuration object passed to ConnectionFactory#createConnection are passed to the Connection instances. If this key is set, then only the parameters that start with the given prefix are passed to the primary connection. Use it if the primary and secondary connections' configurations share a key that should have a different value for each of the connections, e.g. the zookeeper url. Prefixes should not contain a trailing dot. default: empty.
+- `google.bigtable.mirroring.secondary-client.prefix` - If this key is set, then only the parameters that start with the given prefix are passed to the secondary connection. default: empty.
+- `google.bigtable.mirroring.mismatch-detector.factory-impl` - Path to a class implementing MismatchDetector.Factory. default: DefaultMismatchDetector.Factory, which logs detected mismatches to stdout and reports them as OpenCensus metrics.
+- `google.bigtable.mirroring.flow-controller-strategy.factory-impl` - Path to a class to be used as FlowControllerStrategy.Factory. default: RequestCountingFlowControlStrategy.Factory. Used to throttle primary database requests in case of a slower secondary.
+- `google.bigtable.mirroring.flow-controller-strategy.max-outstanding-requests` - Maximum number of outstanding secondary database requests before requests to the primary database are throttled. default: 500.
+- `google.bigtable.mirroring.flow-controller-strategy.max-used-bytes` - Maximum number of bytes used by internal buffers for asynchronous operations before requests to the primary database are throttled. default: 256MB.
+- `google.bigtable.mirroring.write-error-consumer.factory-impl` - Path to a factory of a class to be used as the consumer of secondary database write errors. default: DefaultSecondaryWriteErrorConsumer.Factory, which forwards errors to the faillog using the Appender and Serializer.
+- `google.bigtable.mirroring.write-error-log.serializer.factory-impl` - Factory of the faillog Serializer class implementation, responsible for serializing write errors reported by the Logger to a binary representation, which is later appended to the resulting file by the Appender. default: DefaultSerializer.Factory, which dumps the supplied mutation along with the error stacktrace as JSON.
+- `google.bigtable.mirroring.write-error-log.appender.factory-impl` - Factory of the faillog Appender class implementation. default: DefaultAppender.Factory, which writes the data serialized by the Serializer implementation to a file on disk.
+- `google.bigtable.mirroring.write-error-log.appender.prefix-path` - Used by DefaultAppender, the prefix used for generating the name of the log file. Required.
+- `google.bigtable.mirroring.write-error-log.appender.max-buffer-size` - Used by DefaultAppender, the maximum size of the buffer used for communicating with the thread flushing the data to disk. default: 20971520 bytes (20 MB).
+- `google.bigtable.mirroring.write-error-log.appender.drop-on-overflow` - Used by DefaultAppender, whether to drop data if the thread flushing the data to disk is not keeping up, or to block until it catches up. default: false.
+- `google.bigtable.mirroring.read-verification-rate-percent` - Integer value representing the percentage of read operations performed on the primary database that should be verified against the secondary. Each call to `Table#get(Get)`, `Table#get(List)`, `Table#exists(Get)`, `Table#existsAll(List)`, `Table#batch(List, Object[])` (with overloads) and `Table#getScanner(Scan)` (with overloads) is counted as a single operation, independent of the size of its arguments and results. Valid values are integers from 0 to 100 inclusive. default: 100.
+- `google.bigtable.mirroring.buffered-mutator.bytes-to-flush` - Number of bytes that `MirroringBufferedMutator` should buffer before flushing the underlying primary BufferedMutator and executing a write to the secondary database.
If not set, the value of `hbase.client.write.buffer` is used (2MB by default). When those values are kept in sync, the mirroring client should perform a flush operation on the primary BufferedMutator right after it schedules a new asynchronous write to the database.
+- `google.bigtable.mirroring.enable-default-client-side-timestamps` - Selects the client-side timestamping mode. `disabled`, `inplace` and `copy` are the only valid values. default: `inplace`.
+
+## Caveats
+### Timestamps
+Be aware that client-side timestamping modifies only `Put`s - `Delete`s, `Increment`s and `Append`s are not affected by this setting and will cause inconsistencies between the databases.
+Using client-side timestamping fixes inconsistencies caused by `Put`s, but can lead to **lost writes** if multiple machines modify the same cell and have clocks out of sync - writes from a machine with a correctly set clock will be masked by `Put`s from a machine with a clock set in the future, regardless of the real order of those `Put`s. This problem would not appear if client-side timestamping were disabled - timestamps would be assigned by the server and would reflect the real ordering.
+### Differences between Bigtable and HBase
+There are differences between HBase and Bigtable; please consult [this link](https://cloud.google.com/bigtable/docs/hbase-differences).
+Code using this client should be aware of them.
+### Mirroring Increments and Appends
+`increment` and `append` operations do not allow specifying the timestamp of the new version to create. To keep the databases consistent, the Mirroring Client mirrors these operations as `Put`s that insert the values returned by these methods. This also applies to `Increment`s and `Append`s performed in a `batch()` operation. For that reason, those operations have to be mirrored sequentially, even if the concurrent write mode is enabled.
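The increment-as-put rewrite described above can be illustrated with a small, dependency-free sketch. The real client rewrites HBase `Increment`/`Append` results into `Put`s; the map-based types here are hypothetical stand-ins:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for mirroring increments as absolute-value puts.
final class IncrementMirroringSketch {
  final Map<String, Long> primary = new HashMap<>();
  final Map<String, Long> secondary = new HashMap<>();

  // The increment runs on the primary first and returns the new value.
  // That returned value (not the delta) is then replayed on the secondary
  // as a plain "put", which is why increments must be mirrored sequentially:
  // the value to write is known only after the primary operation finishes.
  long increment(String row, long delta) {
    long newValue = primary.merge(row, delta, Long::sum);
    secondary.put(row, newValue);
    return newValue;
  }
}
```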
+### Verification of 2.x scans is not performed
+`AsyncTable#scan(Scan)` operation results are not verified for consistency with the secondary database. The bigtable-hbase-java client doesn't support AdvancedScanResultConsumer, so we would not be able to throttle its operations if Bigtable were used as the primary database and the secondary database were significantly slower.