
fix(deps): bump rocksdbjni from 6.19.3 to 6.22.1 #15

Closed


@dependabot dependabot bot commented on behalf of github Jul 23, 2021

Bumps rocksdbjni from 6.19.3 to 6.22.1.

Release notes

Sourced from rocksdbjni's releases.

RocksDB 6.22.1

6.22.1 (2021-06-25)

Bug Fixes

  • GetLiveFilesMetaData() now populates the temperature, oldest_ancester_time, and file_creation_time fields of its LiveFileMetaData results when the information is available. Previously these fields always contained zero indicating unknown.

6.22.0 (2021-06-18)

Behavior Changes

  • Added two additional tickers, MEMTABLE_PAYLOAD_BYTES_AT_FLUSH and MEMTABLE_GARBAGE_BYTES_AT_FLUSH. These stats can be used to estimate the ratio of "garbage" (outdated) bytes in the memtable that are discarded at flush time.
  • Added API comments clarifying safe usage of Disable/EnableManualCompaction and EventListener callbacks for compaction.
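
As an illustrative sketch (not part of the release), the two new tickers can be combined into a flush garbage ratio. The interpretation that MEMTABLE_PAYLOAD_BYTES_AT_FLUSH counts useful bytes while MEMTABLE_GARBAGE_BYTES_AT_FLUSH counts discarded bytes is an assumption, and the class name is illustrative:

```java
public class MemtableGarbageRatio {

    // Fraction of flushed memtable bytes that were "garbage" (outdated),
    // assuming the payload ticker excludes garbage bytes (an assumption,
    // not confirmed by the release notes above).
    static double garbageRatio(long payloadBytes, long garbageBytes) {
        final long total = payloadBytes + garbageBytes;
        return total == 0 ? 0.0 : (double) garbageBytes / total;
    }

    public static void main(String[] args) {
        // In a real application these values would come from something like
        // statistics.getTickerCount(TickerType.MEMTABLE_PAYLOAD_BYTES_AT_FLUSH)
        // (hypothetical usage; requires a rocksdbjni Statistics object).
        System.out.println(garbageRatio(900_000L, 100_000L));
    }
}
```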

Bug Fixes

  • fs_posix.cc GetFreeSpace() always reported the disk space available to root even when running as non-root. Linux defaults often reserve 5 to 10 percent of total disk space for root only, so non-root users could hit out-of-space errors.
  • Subcompactions are now disabled when user-defined timestamps are used, since the subcompaction boundary picking logic is currently not timestamp-aware, which could lead to incorrect results when different subcompactions process keys that only differ by timestamp.
  • Fixed an issue where DeleteFilesInRange() could cause an ongoing compaction to report a corruption exception, or to assert in debug builds. No actual data loss or corruption was found.
  • Fixed confusingly duplicated output in LOG for periodic stats ("DUMPING STATS"), including "Compaction Stats" and "File Read Latency Histogram By Level".
  • Fixed performance bugs in the background gathering of block cache entry statistics, which could consume a lot of CPU when many column families share a block cache.

New Features

  • Marked the Ribbon filter and optimize_filters_for_memory features as production-ready, each enabling memory savings for Bloom-like filters. Use NewRibbonFilterPolicy in place of NewBloomFilterPolicy to use Ribbon filters instead of Bloom, or ribbonfilter in place of bloomfilter in the configuration string.
  • Allow DBWithTTL to use the DeleteRange API just like other DBs. To implement this, DeleteRangeCF(), which executes WriteBatchInternal::DeleteRange(), has been added to the handler in DBWithTTLImpl::Write().
  • Added BlockBasedTableOptions.prepopulate_block_cache. If enabled, warm/hot data blocks that are already in memory are prepopulated into the block cache at flush time. On a flush, the data blocks in memory (in memtables) are flushed to the device; when using Direct IO, additional IO is incurred to read this data back into memory, which this option avoids. It also helps with distributed file systems. More details in include/rocksdb/table.h.
  • Added a cancel field to CompactRangeOptions, allowing individual in-process manual range compactions to be cancelled.

RocksDB 6.20.3

6.20.3 (2021-05-05)

Bug Fixes

  • Fixed a bug where GetLiveFiles() output included a non-existent file called "OPTIONS-000000". Backups and checkpoints, which use GetLiveFiles(), failed on DBs impacted by this bug. Read-write DBs were impacted when the latest OPTIONS file failed to write and fail_if_options_file_error == false. Read-only DBs were impacted when no OPTIONS files existed.

6.20.2 (2021-04-23)

Bug Fixes

  • Fixed a bug in handling file rename errors in distributed/network file systems when the server succeeds but the client returns an error. The bug could cause the CURRENT file to point to a non-existent MANIFEST file, making the DB impossible to open.
  • Fixed a bug where ingested files were written with incorrect boundary key metadata. In rare cases this could have led to a level's files being wrongly ordered and queries for the boundary keys returning wrong results.
  • Fixed a data race between insertion into memtables and the retrieval of the DB properties rocksdb.cur-size-active-mem-table, rocksdb.cur-size-all-mem-tables, and rocksdb.size-all-mem-tables.
  • Fixed a false-positive alert when recovering from the WAL file: avoid reporting "SST file is ahead of WAL" on a newly created empty column family if the previous WAL file is corrupted.

Behavior Changes

  • Due to the fix of the false-positive "SST file is ahead of WAL" alert, all CFs with no SST file (empty CFs) now bypass the consistency check. We fixed a false positive but introduced a very rare true negative, triggered under the following conditions: the last few operations on a CF are deletes that leave it empty (those are flushed to an SST file and a compaction combining this file with all other SST files generates an empty CF, or there is another reason to write a manifest entry for this CF after a flush that generates no SST file from an empty CF); the deletion entries are logged in a WAL, and that WAL is corrupted, while the CF's log number points to the next WAL (due to the flush). The DB can then only recover to the point before these trailing deletions, leaving it in an inconsistent state.

6.20.0 (2021-04-16)

Behavior Changes

  • ColumnFamilyOptions::sample_for_compression now takes effect for creation of all block-based tables. Previously it only took effect for block-based tables created by flush.
  • CompactFiles() can no longer compact files from a lower level to an upper level, which risked corrupting the DB (details: #8063). The validation has also been added to all compactions.
  • Fixed some cases in which DB::OpenForReadOnly() could write to the filesystem. If you want a Logger with a read-only DB, you must now set DBOptions::info_log yourself, such as using CreateLoggerFromOptions().
  • get_iostats_context() will never return nullptr. If thread-local support is not available and the user does not opt out of the iostats context, compilation will fail. The same applies to the perf context.

Bug Fixes

  • Use thread-safe strerror_r() to get error messages.
  • Fixed a potential hang in shutdown for a DB whose Env has the high-priority thread pool disabled (Env::GetBackgroundThreads(Env::Priority::HIGH) == 0).
  • Made BackupEngine thread-safe and added documentation comments to clarify what is safe for multiple BackupEngine objects accessing the same backup directory.
  • Fixed crash (divide by zero) when compression dictionary is applied to a file containing only range tombstones.

... (truncated)

Changelog

Sourced from rocksdbjni's changelog.

6.21.0 (2021-05-21)

Bug Fixes

  • Fixed a bug in handling file rename errors in distributed/network file systems when the server succeeds but the client returns an error. The bug could cause the CURRENT file to point to a non-existent MANIFEST file, making the DB impossible to open.
  • Fixed a bug where ingested files were written with incorrect boundary key metadata. In rare cases this could have led to a level's files being wrongly ordered and queries for the boundary keys returning wrong results.
  • Fixed a data race between insertion into memtables and the retrieval of the DB properties rocksdb.cur-size-active-mem-table, rocksdb.cur-size-all-mem-tables, and rocksdb.size-all-mem-tables.
  • Fixed a false-positive alert when recovering from the WAL file: avoid reporting "SST file is ahead of WAL" on a newly created empty column family if the previous WAL file is corrupted.
  • Fixed a bug where GetLiveFiles() output included a non-existent file called "OPTIONS-000000". Backups and checkpoints, which use GetLiveFiles(), failed on DBs impacted by this bug. Read-write DBs were impacted when the latest OPTIONS file failed to write and fail_if_options_file_error == false. Read-only DBs were impacted when no OPTIONS files existed.
  • Handle the return codes of io_uring_submit_and_wait() and io_uring_wait_cqe().
  • In the IngestExternalFile() API, only try to sync the ingested file if the file is linked and the FileSystem/Env supports reopening a writable file.
  • Fixed a bug where AdvancedColumnFamilyOptions.max_compaction_bytes was under-calculated for manual compaction (CompactRange()). Manual compaction is split into multiple compactions if the compaction size exceeds max_compaction_bytes; the bug created much larger compactions whose size exceeded the user setting. On the other hand, a larger manual compaction size can increase subcompaction parallelism, which you can tune by setting max_compaction_bytes.

Behavior Changes

  • Due to the fix of the false-positive "SST file is ahead of WAL" alert, all CFs with no SST file (empty CFs) now bypass the consistency check. We fixed a false positive but introduced a very rare true negative, triggered under the following conditions: the last few operations on a CF are deletes that leave it empty (those are flushed to an SST file and a compaction combining this file with all other SST files generates an empty CF, or there is another reason to write a manifest entry for this CF after a flush that generates no SST file from an empty CF); the deletion entries are logged in a WAL, and that WAL is corrupted, while the CF's log number points to the next WAL (due to the flush). The DB can then only recover to the point before these trailing deletions, leaving it in an inconsistent state.

New Features

  • Added a new option, allow_stall, passed during instance creation of WriteBufferManager. When allow_stall is set, WriteBufferManager will stall all writers shared across multiple DBs and column families if memory usage goes beyond the specified WriteBufferManager::buffer_size (soft limit). The stall is cleared when memory is freed after a flush and usage goes back below buffer_size.
  • Allow CompactionFilters to apply in more table file creation scenarios such as flush and recovery. For compatibility, CompactionFilters by default apply during compaction. Users can customize this behavior by overriding CompactionFilterFactory::ShouldFilterTableFileCreation().
  • Added more fields to FilterBuildingContext with LSM details, for custom filter policies that vary behavior based on where they are in the LSM-tree.
  • Added DB::Properties::kBlockCacheEntryStats for querying statistics on what percentage of block cache is used by various kinds of blocks, etc. using DB::GetProperty and DB::GetMapProperty. The same information is now dumped to info LOG periodically according to stats_dump_period_sec.
  • Added an experimental Remote Compaction feature, which allows the user to run compaction on a different host or process. The feature is still under development and currently works only for some basic use cases. The interface will be changed without backward/forward compatibility support.
  • RocksDB now validates the total number of entries read during flush against the number of entries inserted into the memtable. If flush_verify_memtable_count = true (the default), a mismatch causes the flush to fail; otherwise it is only logged to the info logs.
  • Add TableProperties::num_filter_entries, which can be used with TableProperties::filter_size to calculate the effective bits per filter entry (unique user key or prefix) for a table file.
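
The last item describes computing the effective bits per filter entry from two table properties. A minimal sketch of that arithmetic, assuming filter_size is in bytes (class and method names are illustrative):

```java
public class FilterBits {

    // Effective bits per filter entry: filter size (bytes) times 8,
    // divided by the number of unique user keys or prefixes in the filter.
    static double bitsPerEntry(long filterSizeBytes, long numFilterEntries) {
        return numFilterEntries == 0 ? 0.0 : filterSizeBytes * 8.0 / numFilterEntries;
    }

    public static void main(String[] args) {
        // e.g. a 12,500-byte filter covering 10,000 entries -> 10 bits/entry
        System.out.println(bitsPerEntry(12_500L, 10_000L));
    }
}
```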

Performance Improvements

  • BlockPrefetcher is used by iterators to prefetch data when they anticipate more data will be used in the future. It is enabled implicitly by RocksDB. A change was added to take the read pattern into account and detect sequential reads; this disables prefetching for random reads in MultiGet and iterators, where readahead_size would otherwise grow exponentially, causing large prefetches.

Public API change

  • Removed a parameter from TableFactory::NewTableBuilder, which should not be called by user code because TableBuilder is not a public API.

... (truncated)

Commits
  • 51b5409 Update HISTORY.md and version.h for 6.22.1
  • e8b841e add missing fields to GetLiveFilesMetaData() (#8460)
  • 6ee0acd Revert "Make Comparator into a Customizable Object (#8336)"
  • 6fc2818 Fixup HISTORY.md for 6.22 release (#8441)
  • 5881f4a Fix DeleteFilesInRange may cause inconsistent compaction error (#8434)
  • 28aa6d4 Fix a tsan warning due to reading flags in LRUHandle without holding a mutex ...
  • 9d7de96 Skip c_test and env_test when ASSERT_STATUS_CHECKED=1 (#8430)
  • f64f08e Update HISTORY.md and version.h 6.22 release (#8427)
  • 19a8926 typo: fix typo in db/write_thread's state (#8423)
  • 82a70e1 Trace MultiGet Keys and CF_IDs to the trace file (#8421)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [rocksdbjni](https://github.com/facebook/rocksdb) from 6.19.3 to 6.22.1.
- [Release notes](https://github.com/facebook/rocksdb/releases)
- [Changelog](https://github.com/facebook/rocksdb/blob/v6.22.1/HISTORY.md)
- [Commits](facebook/rocksdb@v6.19.3...v6.22.1)

---
updated-dependencies:
- dependency-name: org.rocksdb:rocksdbjni
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot bot commented on behalf of github Jul 23, 2021

The following labels could not be found: dependencies.

dependabot bot commented on behalf of github Jul 26, 2021

Superseded by #16.

@dependabot dependabot bot closed this Jul 26, 2021
@dependabot dependabot bot deleted the dependabot/maven/org.rocksdb-rocksdbjni-6.22.1 branch July 26, 2021 17:17
aivinog1 pushed a commit that referenced this pull request Aug 25, 2021
6606: Add compact view of timer records r=npepinpe a=npepinpe

## Description

This PR adds special formatting for timer records when printed as part of the compacted log. It will show the target element ID, the process definition key, either that it is a start event or its element instance key, its process instance key, when it is due (omitting the date part if it's the same as the current date), and how many repetitions it has left.

Samples:

```
C TIMER      CREATE      - #15->#14  -1 -  @"timer"[K07] in <process "K01"[K03]> due T10:17:47.856
E TIMER      CREATED     - #6->#3    K3 -  @"timer"[-1]  in <process "K1"[-1]> due T10:18:16.072
E TIMER      CREATED     - #6->#3    K3 -  @"timer"[-1]  in <process "K1"[-1]> due T10:13:11.122 2 reps
```

## Related issues

## Definition of Done

_Not all items need to be done; it depends on the issue and the pull request._

Code changes:
* [ ] The changes are backward compatible with previous versions
* [ ] If it fixes a bug then PRs are created to [backport](https://github.com/zeebe-io/zeebe/compare/stable/0.24...develop?expand=1&template=backport_template.md&title=[Backport%200.24]) the fix to the last two minor versions. You can trigger a backport by assigning labels (e.g. `backport stable/0.25`) to the PR, in case that fails you need to create backports manually.

Testing:
* [ ] There are unit/integration tests that verify all acceptance criterias of the issue
* [ ] New tests are written to ensure backwards compatibility with further versions
* [ ] The behavior is tested manually
* [ ] The change has been verified by a QA run
* [ ] The impact of the changes is verified by a benchmark 

Documentation: 
* [ ] The documentation is updated (e.g. BPMN reference, configuration, examples, get-started guides, etc.)
* [ ] New content is added to the [release announcement](https://drive.google.com/drive/u/0/folders/1DTIeswnEEq-NggJ25rm2BsDjcCQpDape)


Co-authored-by: Nicolas Pépin-Perreault <nicolas.pepin-perreault@camunda.com>
aivinog1 pushed a commit that referenced this pull request Aug 25, 2021
6701: Improve compact log for rejections and error records r=saig0 a=saig0

## Description

* log rejections in the same way as commands and events
* provide more data for root causing a rejection (e.g. value type, intent, position, etc.)
* add summaries for error records
* shrink the terms: `INSTANCE` -> `INST` and `VARIABLE` -> `VAR`
* remove the term `ELEMENT` completely because it is too verbose and redundant (every activate/activating/activated, etc., belongs to an element)

<details><summary>Current output</summary>
<p>

```
07:43:02.648 [] INFO  io.zeebe.test - Compact log representation:
--------
	['C'ommand/'E'event/'R'ejection] [valueType] [intent] - #[position]->#[source record position]  P[partitionId]K[key] - [summary of value]
	P9K999 - key; camunda#999 - record position; "ID" element/process id; @"elementid"/[P9K999] - element with ID and key
	Keys are decomposed into partition id and per partition key (e.g. 2251799813685253 -> P1K005). If single partition, the partition is omitted.
	Long IDs are shortened (e.g. 'startEvent_5d56488e-0570-416c-ba2d-36d2a3acea78' -> 'star..acea78'
--------
C DEPLOYMENT             CREATE           - #1-> -1  -1 - process.xml
E PROC                   CREATED          - #2->#1 K01 - process.xml->"process" (version:1)
E DEPLOYMENT             CREATED          - #3->#1 K02 - process.xml
E DEPLOYMENT             FULLY_DISTR      - #4->#1 K02 - 
C MSG                    PUBLISH          - #5-> -1  -1 - "msg1" correlationKey: order-123 (no vars)
E MSG                    PUBLISHED        - #6->#5 K03 - "msg1" correlationKey: order-123 (no vars)
C PROC_INSTANCE_CREATION CREATE           - #7-> -1  -1 - new <process "process"> with variables: {key=order-123}
E VARIABLE               CREATED          - #8->#7 K05 - key->"order-123" in <process [K04]>
E PROC_INSTANCE          ELMNT_ACTIVATING - #9->#7 K04 - PROCESS "process" in <process "process"[K04]>
E PROC_INSTANCE_CREATION CREATED          - #10->#7 K06 - new <process "process"> with variables: {key=order-123}
E PROC_INSTANCE          ELMNT_ACTIVATED  - #11->#9 K04 - PROCESS "process" in <process "process"[K04]>
E PROC_INSTANCE          ELMNT_ACTIVATING - #12->#11 K07 - START_EVENT "startEv..323d1cb" in <process "process"[K04]>
E PROC_INSTANCE          ELMNT_ACTIVATED  - #13->#12 K07 - START_EVENT "startEv..323d1cb" in <process "process"[K04]>
E PROC_INSTANCE          ELMNT_COMPLETING - #14->#13 K07 - START_EVENT "startEv..323d1cb" in <process "process"[K04]>
E PROC_INSTANCE          ELMNT_COMPLETED  - #15->#14 K07 - START_EVENT "startEv..323d1cb" in <process "process"[K04]>
E PROC_INSTANCE          SEQ_FLOW_TAKEN   - #16->#15 K08 - SEQUENCE_FLOW "sequenc..8cfbcaa" in <process "process"[K04]>
C PROC_INSTANCE          ACTIVATE_ELMNT   - #17->#16 K09 - RECEIVE_TASK "task" in <process "process"[K04]>
E PROC_INSTANCE          ELMNT_ACTIVATING - #18->#17 K09 - RECEIVE_TASK "task" in <process "process"[K04]>
E PROC_MSG_SUB           CREATING         - #19->#17  -1 - "taskMsg" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E PROC_MSG_SUB           CREATING         - #20->#17  -1 - "msg2" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E PROC_MSG_SUB           CREATING         - #21->#17  -1 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E PROC_INSTANCE          ELMNT_ACTIVATED  - #22->#17 K09 - RECEIVE_TASK "task" in <process "process"[K04]>
C MSG_SUB                CREATE           - #23-> -1  -1 - "taskMsg" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C MSG_SUB                CREATE           - #24-> -1  -1 - "msg2" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C MSG_SUB                CREATE           - #25-> -1  -1 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E MSG_SUB                CREATED          - #26->#23 K10 - "taskMsg" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C PROC_MSG_SUB           CREATE           - #27-> -1  -1 - "taskMsg" (inter.) @[K09] in <process ?> (no vars)
E MSG_SUB                CREATED          - #28->#24 K11 - "msg2" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C PROC_MSG_SUB           CREATE           - #29-> -1  -1 - "msg2" (inter.) @[K09] in <process ?> (no vars)
E MSG_SUB                CREATED          - #30->#25 K12 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E MSG_SUB                CORRELATING      - #31->#25 K12 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C PROC_MSG_SUB           CORRELATE        - #32-> -1  -1 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E PROC_MSG_SUB           CREATED          - #33->#27  -1 - "taskMsg" (inter.) @[K09] in <process ?> (no vars)
E PROC_MSG_SUB           CREATED          - #34->#29  -1 - "msg2" (inter.) @[K09] in <process ?> (no vars)
E PROC_MSG_SUB           CORRELATED       - #35->#32  -1 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C PROC_INSTANCE          ACTIVATE_ELMNT   - #36->#32  -1 - RECEIVE_TASK "task" in <process "process"[K04]>
C MSG_SUB                CORRELATE        - #37-> -1  -1 - "msg1" (inter.) @[K09] in <process "process"[K04]> (no vars)
PROCESSING_ERROR Expected to process event 'TypedEventImpl{metadata=RecordMetadata{recordType=COMMAND, intentValue=255, intent=ACTIVATE_ELEMENT, requestStreamId=-2147483648, requestId=-1, protocolVersion=3, valueType=PROCESS_INSTANCE, rejectionType=NULL_VAL, rejectionReason=, brokerVersion=1.0.0}, value={"bpmnProcessId":"process","version":1,"processDefinitionKey":2251799813685249,"processInstanceKey":2251799813685252,"elementId":"task","flowScopeKey":2251799813685252,"bpmnElementType":"RECEIVE_TASK","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1}}' without errors, but exception occurred with message 'Not expected to have an active sequence flow count lower then zero!'.#38->#36  -1 - 
E ERROR                  CREATED          - #39->#36  -1 - ErrorRecord {"stacktrace":"java.lang.IllegalStateException: Not expected to have an active sequence flow count lower then zero!\n\tat io.zeebe.engine.state.instance.ElementInstance.decrementActiveSequenceFlows(ElementInstance.java:168)\n\tat io.zeebe.engine.state.appliers.ProcessInstanceElementActivatingApplier.decrementActiveSequenceFlow(ProcessInstanceElementActivatingApplier.java:108)\n\tat io.zeebe.engine.state.appliers.ProcessInstanceElementActivatingApplier.applyState(ProcessInstanceElementActivatingApplier.java:62)\n\tat io.zeebe.engine.state.appliers.ProcessInstanceElementActivatingApplier.applyState(ProcessInstanceElementActivatingApplier.java:26)\n\tat io.zeebe.engine.state.appliers.EventAppliers.applyState(EventAppliers.java:217)\n\tat io.zeebe.engine.processing.streamprocessor.writers.EventApplyingStateWriter.appendFollowUpEvent(EventApplyingStateWriter.java:49)\n\tat io.zeebe.engine.processing.streamprocessor.writers.EventApplyingStateWriter.appendFollowUpEvent(EventApplyingStateWriter.java:39)\n\tat io.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.transitionTo(BpmnStateTransitionBehavior.java:216)\n\tat io.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.transitionToActivating(BpmnStateTransitionBehavior.java:103)\n\tat io.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processEvent(BpmnStreamProcessor.java:146)\n\tat io.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processRecord(BpmnStreamProcessor.java:134)\n\tat io.zeebe.engine.processing.streamprocessor.TypedRecordProcessor.processRecord(TypedRecordProcessor.java:54)\n\tat io.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.lambda$processInTransaction$3(ProcessingStateMachine.java:296)\n\tat io.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84)\n\tat 
io.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processInTransaction(ProcessingStateMachine.java:286)\n\tat io.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processEvent(ProcessingStateMachine.java:254)\n\tat io.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.tryToReadNextEvent(ProcessingStateMachine.java:219)\n\tat io.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.readNextEvent(ProcessingStateMachine.java:210)\n\tat io.zeebe.util.sched.ActorJob.invoke(ActorJob.java:73)\n\tat io.zeebe.util.sched.ActorJob.execute(ActorJob.java:39)\n\tat io.zeebe.util.sched.ActorTask.execute(ActorTask.java:122)\n\tat io.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:94)\n\tat io.zeebe.util.sched.ActorThread.doWork(ActorThread.java:78)\n\tat io.zeebe.util.sched.ActorThread.run(ActorThread.java:191)\n","processInstanceKey":2251799813685252,"exceptionMessage":"Not expected to have an active sequence flow count lower then zero!","errorEventPosition":36}
--------
Decomposed keys (for debugging):
 -1 <-> -1
K01 <-> 2251799813685249
K02 <-> 2251799813685250
K03 <-> 2251799813685251
K04 <-> 2251799813685252
K05 <-> 2251799813685253
K06 <-> 2251799813685254
K07 <-> 2251799813685255
K08 <-> 2251799813685256
K09 <-> 2251799813685257
K10 <-> 2251799813685258
K11 <-> 2251799813685259
K12 <-> 2251799813685260
```

<p>
</details>
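
The legend in the output above notes that keys are decomposed into a partition id and a per-partition key (e.g. 2251799813685253 -> P1K005). As a sketch: the bit split (partition id in the upper bits, a 51-bit per-partition counter in the lower bits) is inferred from the sample keys, since 2251799813685249 = 2^51 + 1 decomposes to K01; the class name is illustrative:

```java
public class KeyDecomposer {

    // Number of low-order bits holding the per-partition counter,
    // inferred from the sample keys in the compact log above.
    static final int KEY_BITS = 51;

    // Renders a 64-bit key as P<partitionId>K<localKey>, e.g.
    // 2251799813685253 -> "P1K005"; negative sentinel keys pass through.
    static String decompose(long key) {
        if (key < 0) {
            return String.valueOf(key);
        }
        final long partitionId = key >>> KEY_BITS;
        final long localKey = key & ((1L << KEY_BITS) - 1);
        return String.format("P%dK%03d", partitionId, localKey);
    }

    public static void main(String[] args) {
        System.out.println(decompose(2251799813685253L));
    }
}
```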

<details><summary>Output with the changes</summary>
<p>

```
07:38:03.861 [] INFO  io.zeebe.test - Compact log representation:
--------
	['C'ommand/'E'event/'R'ejection] [valueType] [intent] - #[position]->#[source record position]  P[partitionId]K[key] - [summary of value]
	P9K999 - key; camunda#999 - record position; "ID" element/process id; @"elementid"/[P9K999] - element with ID and key
	Keys are decomposed into partition id and per partition key (e.g. 2251799813685253 -> P1K005). If single partition, the partition is omitted.
	Long IDs are shortened (e.g. 'startEvent_5d56488e-0570-416c-ba2d-36d2a3acea78' -> 'star..acea78'
--------
C DEPLOYMENT         CREATE         - #1-> -1  -1 - process.xml
E PROC               CREATED        - #2->#1 K01 - process.xml->"process" (version:1)
E DEPLOYMENT         CREATED        - #3->#1 K02 - process.xml
E DEPLOYMENT         FULLY_DISTR    - #4->#1 K02 - 
C MSG                PUBLISH        - #5-> -1  -1 - "msg1" correlationKey: order-123 (no vars)
E MSG                PUBLISHED      - #6->#5 K03 - "msg1" correlationKey: order-123 (no vars)
C PROC_INST_CREATION CREATE         - #7-> -1  -1 - new <process "process"> with variables: {key=order-123}
E VAR                CREATED        - #8->#7 K05 - key->"order-123" in <process [K04]>
E PROC_INST          ACTIVATING     - #9->#7 K04 - PROCESS "process" in <process "process"[K04]>
E PROC_INST_CREATION CREATED        - #10->#7 K06 - new <process "process"> with variables: {key=order-123}
E PROC_INST          ACTIVATED      - #11->#9 K04 - PROCESS "process" in <process "process"[K04]>
E PROC_INST          ACTIVATING     - #12->#11 K07 - START_EVENT "startEv..b233ada" in <process "process"[K04]>
E PROC_INST          ACTIVATED      - #13->#12 K07 - START_EVENT "startEv..b233ada" in <process "process"[K04]>
E PROC_INST          COMPLETING     - #14->#13 K07 - START_EVENT "startEv..b233ada" in <process "process"[K04]>
E PROC_INST          COMPLETED      - #15->#14 K07 - START_EVENT "startEv..b233ada" in <process "process"[K04]>
E PROC_INST          SEQ_FLOW_TAKEN - #16->#15 K08 - SEQUENCE_FLOW "sequenc..fb40cc7" in <process "process"[K04]>
C PROC_INST          ACTIVATE       - #17->#16 K09 - RECEIVE_TASK "task" in <process "process"[K04]>
E PROC_INST          ACTIVATING     - #18->#17 K09 - RECEIVE_TASK "task" in <process "process"[K04]>
E PROC_MSG_SUB       CREATING       - #19->#17  -1 - "taskMsg" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E PROC_MSG_SUB       CREATING       - #20->#17  -1 - "msg2" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E PROC_MSG_SUB       CREATING       - #21->#17  -1 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E PROC_INST          ACTIVATED      - #22->#17 K09 - RECEIVE_TASK "task" in <process "process"[K04]>
C MSG_SUB            CREATE         - #23-> -1  -1 - "taskMsg" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C MSG_SUB            CREATE         - #24-> -1  -1 - "msg2" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C MSG_SUB            CREATE         - #25-> -1  -1 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E MSG_SUB            CREATED        - #26->#23 K10 - "taskMsg" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C PROC_MSG_SUB       CREATE         - #27-> -1  -1 - "taskMsg" (inter.) @[K09] in <process ?[K04]> (no vars)
E MSG_SUB            CREATED        - #28->#24 K11 - "msg2" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C PROC_MSG_SUB       CREATE         - #29-> -1  -1 - "msg2" (inter.) @[K09] in <process ?[K04]> (no vars)
E MSG_SUB            CREATED        - #30->#25 K12 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E MSG_SUB            CORRELATING    - #31->#25 K12 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C PROC_MSG_SUB       CORRELATE      - #32-> -1  -1 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
E PROC_MSG_SUB       CREATED        - #33->#27  -1 - "taskMsg" (inter.) @[K09] in <process ?[K04]> (no vars)
E PROC_MSG_SUB       CREATED        - #34->#29  -1 - "msg2" (inter.) @[K09] in <process ?[K04]> (no vars)
E PROC_MSG_SUB       CORRELATED     - #35->#32  -1 - "msg1" (inter.) correlationKey: order-123 @[K09] in <process "process"[K04]> (no vars)
C PROC_INST          ACTIVATE       - #36->#32  -1 - RECEIVE_TASK "task" in <process "process"[K04]>
C MSG_SUB            CORRELATE      - #37-> -1  -1 - "msg1" (inter.) @[K09] in <process "process"[K04]> (no vars)
R PROC_INST          ACTIVATE       - #38->#36  -1 - RECEIVE_TASK "task" in <process "process"[K04]> !PROCESSING_ERROR (Expected to process event 'TypedEventImpl{metadata=RecordMetadata{recordType=COMMAND, intentValue=..)
E ERROR              CREATED        - #39->#36  -1 - "Not expected to have an active sequence flow count lower then zero!"  in <process ?[K04]> (java.lang.IllegalStateException: Not expected to have an active sequence flow count lower then zer..)
--------
Decomposed keys (for debugging):
 -1 <-> -1
K01 <-> 2251799813685249
K02 <-> 2251799813685250
K03 <-> 2251799813685251
K04 <-> 2251799813685252
K05 <-> 2251799813685253
K06 <-> 2251799813685254
K07 <-> 2251799813685255
K08 <-> 2251799813685256
K09 <-> 2251799813685257
K10 <-> 2251799813685258
K11 <-> 2251799813685259
K12 <-> 2251799813685260
```

<p>
</details>
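The decomposed-keys table makes the encoding visible: each record key packs the partition id into the high bits and a per-partition counter into the low bits. A minimal sketch in Python, assuming the low 51 bits carry the per-partition key (which matches the values above, e.g. 2251799813685249 = 2^51 + 1):

```python
KEY_BITS = 51  # assumption: the low 51 bits hold the per-partition key


def decompose(key: int) -> str:
    """Render a raw record key in the P<partition>K<key> form used above."""
    partition_id = key >> KEY_BITS
    local_key = key & ((1 << KEY_BITS) - 1)
    return f"P{partition_id}K{local_key:03d}"


# e.g. decompose(2251799813685253) renders as 'P1K005'
```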

_Most of the changes are in the last two records._

## Related issues


## Definition of Done

_Depending on the issue and the pull request, not all items need to be done._

Code changes:
* [ ] The changes are backwards compatible with previous versions
* [ ] If it fixes a bug then PRs are created to [backport](https://github.com/zeebe-io/zeebe/compare/stable/0.24...develop?expand=1&template=backport_template.md&title=[Backport%200.24]) the fix to the last two minor versions. You can trigger a backport by assigning labels (e.g. `backport stable/0.25`) to the PR; if that fails, create the backports manually.

Testing:
* [ ] There are unit/integration tests that verify all acceptance criteria of the issue
* [ ] New tests are written to ensure backwards compatibility with future versions
* [ ] The behavior is tested manually
* [ ] The change has been verified by a QA run
* [ ] The impact of the changes is verified by a benchmark 

Documentation: 
* [ ] The documentation is updated (e.g. BPMN reference, configuration, examples, get-started guides, etc.)
* [ ] New content is added to the [release announcement](https://drive.google.com/drive/u/0/folders/1DTIeswnEEq-NggJ25rm2BsDjcCQpDape)


Co-authored-by: Philipp Ossler <philipp.ossler@gmail.com>
aivinog1 pushed a commit that referenced this pull request Apr 17, 2022
8843: test(test-util): summarize DMN records r=saig0 a=saig0

## Description

* extend the compact logger that is mainly used to print the records when a test failed
* display DMN resources of deployment record
* summarize decision requirements and decision records
* summarize decision evaluation records

Before:
```
C DEPLOYMENT            CREATE         - #1-> -1  -1 - 
E DECISION_REQUIREMENTS CREATED        - #2->#1 K01 - DecisionRequirementsRecord {"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPG...","decisionRequirementsKey":2251799813685249,"decisionRequirementsId":"force_users","resourceName":"/dmn/drg-force-user.dmn","checksum":"4CVEZphbvJhEHAY8+PoEbw==","decisionRequirementsName":"force_users","decisionRequirementsVersion":1,"namespace":"http://camunda.org/schema/1.0/dmn","duplicate":false}
E DECISION              CREATED        - #3->#1 K02 - DecisionRecord {"version":1,"decisionKey":2251799813685250,"decisionId":"jedi_or_sith","decisionName":"Jedi or Sith","decisionRequirementsKey":2251799813685249,"decisionRequirementsId":"force_users","duplicate":false}
E DECISION              CREATED        - #4->#1 K03 - DecisionRecord {"version":1,"decisionKey":2251799813685251,"decisionId":"force_user","decisionName":"Which force user?","decisionRequirementsKey":2251799813685249,"decisionRequirementsId":"force_users","duplicate":false}
E PROC                  CREATED        - #5->#1 K04 - process.xml -> "process" (version:1)
E DEPLOYMENT            CREATED        - #6->#1 K05 - process.xml
E DEPLOYMENT            FULLY_DISTR    - #7->#1 K05 - 

...

E DECISION_EVALUATION   EVALUATED      - #24->#22 K13 - DecisionEvaluationRecord {"decisionKey":2251799813685251,"decisionId":"force_user","decisionName":"Which force user?","decisionRequirementsKey":2251799813685249,"decisionRequirementsId":"force_users","processDefinitionKey":2251799813685252,"bpmnProcessId":"process","processInstanceKey":2251799813685254,"elementId":"task","evaluatedDecisions":[{"decisionKey":2251799813685250,"decisionId":"jedi_or_sith","decisionName":"Jedi or Sith","evaluatedInputs":[{"inputId":"Input_1","inputName":"Lightsaber color","inputValue":"\"blue\""}],"matchedRules":[{"evaluatedOutputs":[{"outputId":"Output_1","outputName":"jedi_or_sith","outputValue":"\"Jedi\""}],"ruleId":"DecisionRule_0zumznl","ruleIndex":1}],"decisionType":"DECISION_TABLE","decisionOutput":"\"Jedi\""},{"decisionKey":2251799813685251,"decisionId":"force_user","decisionName":"Which force user?","evaluatedInputs":[{"inputId":"InputClause_0qnqj25","inputName":"Jedi or Sith","inputValue":"\"Jedi\""},{"inputId":"InputClause_0k64hys","inputName":"Body height","inputValue":"182"}],"matchedRules":[{"evaluatedOutputs":[{"outputId":"OutputClause_0hhe1yo","outputName":"force_user","outputValue":"\"Obi-Wan Kenobi\""}],"ruleId":"DecisionRule_0uin2hk","ruleIndex":2}],"decisionType":"DECISION_TABLE","decisionOutput":"\"Obi-Wan Kenobi\""}],"decisionOutput":"\"Obi-Wan Kenobi\"","elementInstanceKey":2251799813685260,"decisionVersion":1,"evaluationFailureMessage":"","failedDecisionId":""}
```

After:
```
C DEPLOYMENT         CREATE         - #1-> -1  -1 - 
E DRG                CREATED        - #2->#1 K01 - /dmn/drg-force-user.dmn -> "force_users" (version:1)
E DECISION           CREATED        - #3->#1 K02 - "jedi_or_sith" (version:1) of <drg "force_users"[K01]>
E DECISION           CREATED        - #4->#1 K03 - "force_user" (version:1) of <drg "force_users"[K01]>
E PROC               CREATED        - #5->#1 K04 - process.xml -> "process" (version:1)
E DEPLOYMENT         CREATED        - #6->#1 K05 - process.xml, /dmn/drg-force-user.dmn
E DEPLOYMENT         FULLY_DISTR    - #7->#1 K05 - 

...

E DECISION_EVAL      EVALUATED      - #24->#22 K13 - "Obi-Wan Kenobi" of <decision "force_user"[K03]> in <process "process"[K06]> @"task"[K12]
```
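The summarized decision-evaluation line can be illustrated with a small sketch. `summarize_decision_evaluation` is a hypothetical helper, not the logger's actual code; the field names come from the `DecisionEvaluationRecord` JSON in the "Before" output, and the key abbreviations like `[K03]` are omitted for brevity:

```python
def summarize_decision_evaluation(record: dict) -> str:
    """Hypothetical summarizer for the compact line shown above."""
    # decisionOutput is already JSON-encoded, e.g. '"Obi-Wan Kenobi"'
    return (
        f'{record["decisionOutput"]} of <decision "{record["decisionId"]}">'
        f' in <process "{record["bpmnProcessId"]}"> @"{record["elementId"]}"'
    )


record = {
    "decisionOutput": '"Obi-Wan Kenobi"',
    "decisionId": "force_user",
    "bpmnProcessId": "process",
    "elementId": "task",
}
```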

<details>
<summary>Full test output</summary>
<pre>
C DEPLOYMENT         CREATE         - #1-> -1  -1 - 
E DRG                CREATED        - #2->#1 K01 - /dmn/drg-force-user.dmn -> "force_users" (version:1)
E DECISION           CREATED        - #3->#1 K02 - "jedi_or_sith" (version:1) of <drg "force_users"[K01]>
E DECISION           CREATED        - #4->#1 K03 - "force_user" (version:1) of <drg "force_users"[K01]>
E PROC               CREATED        - #5->#1 K04 - process.xml -> "process" (version:1)
E DEPLOYMENT         CREATED        - #6->#1 K05 - process.xml, /dmn/drg-force-user.dmn
E DEPLOYMENT         FULLY_DISTR    - #7->#1 K05 - 
C PROC_INST_CREATION CREATE         - #8-> -1  -1 - new <process "process"> with variables: {lightsaberColor=blue, height=182}
E VAR                CREATED        - #9->#8 K07 - height->182 in <process [K06]>
E VAR                CREATED        - #10->#8 K08 - lightsaberColor->"blue" in <process [K06]>
C PROC_INST          ACTIVATE       - #11->#8 K06 - PROCESS "process" in <process "process"[K06]>
E PROC_INST_CREATION CREATED        - #12->#8 K09 - new <process "process"> with variables: {lightsaberColor=blue, height=182}
E PROC_INST          ACTIVATING     - #13->#11 K06 - PROCESS "process" in <process "process"[K06]>
E PROC_INST          ACTIVATED      - #14->#11 K06 - PROCESS "process" in <process "process"[K06]>
C PROC_INST          ACTIVATE       - #15->#11  -1 - START_EVENT "startEv..29dbbd9" in <process "process"[K06]>
E PROC_INST          ACTIVATING     - #16->#15 K10 - START_EVENT "startEv..29dbbd9" in <process "process"[K06]>
E PROC_INST          ACTIVATED      - #17->#15 K10 - START_EVENT "startEv..29dbbd9" in <process "process"[K06]>
C PROC_INST          COMPLETE       - #18->#15 K10 - START_EVENT "startEv..29dbbd9" in <process "process"[K06]>
E PROC_INST          COMPLETING     - #19->#18 K10 - START_EVENT "startEv..29dbbd9" in <process "process"[K06]>
E PROC_INST          COMPLETED      - #20->#18 K10 - START_EVENT "startEv..29dbbd9" in <process "process"[K06]>
E PROC_INST          SEQ_FLOW_TAKEN - #21->#18 K11 - SEQUENCE_FLOW "sequenc..272f6f6" in <process "process"[K06]>
C PROC_INST          ACTIVATE       - #22->#18 K12 - BUSINESS_RULE_TASK "task" in <process "process"[K06]>
E PROC_INST          ACTIVATING     - #23->#22 K12 - BUSINESS_RULE_TASK "task" in <process "process"[K06]>
E DECISION_EVAL      EVALUATED      - #24->#22 K13 - "Obi-Wan Kenobi" of <decision "force_user"[K03]> in <process "process"[K06]> @"task"[K12]
E PROC_EVNT          TRIGGERING     - #25->#22 K14 -  @"task"[K12] in <process K04[K06]> with variables: {result=Obi-Wan Kenobi}
E PROC_INST          ACTIVATED      - #26->#22 K12 - BUSINESS_RULE_TASK "task" in <process "process"[K06]>
C PROC_INST          COMPLETE       - #27->#22 K12 - BUSINESS_RULE_TASK "task" in <process "process"[K06]>
E PROC_INST          COMPLETING     - #28->#27 K12 - BUSINESS_RULE_TASK "task" in <process "process"[K06]>
E VAR                CREATED        - #29->#27 K15 - result->"Obi-Wan Kenobi" in <process [K06]>
E PROC_INST          COMPLETED      - #30->#27 K12 - BUSINESS_RULE_TASK "task" in <process "process"[K06]>
C PROC_INST          COMPLETE       - #31->#27 K06 - PROCESS "process" in <process "process"[K06]>
E PROC_INST          COMPLETING     - #32->#31 K06 - PROCESS "process" in <process "process"[K06]>
E PROC_INST          COMPLETED      - #33->#31 K06 - PROCESS "process" in <process "process"[K06]>
</pre>
</details>

## Related issues

None.



Co-authored-by: Philipp Ossler <philipp.ossler@gmail.com>
aivinog1 pushed a commit that referenced this pull request Aug 30, 2022
10144: Add more abbreviations to the compact record logger r=saig0 a=korthout

## Description

<!-- Please explain the changes you made here. -->

While writing an engine test, I encountered some more unabbreviated record value types. This adds a few more abbreviations to create a compact and easier-to-read record log.

### Before

```
--------
	[Partition] ['C'ommand/'E'event/'R'ejection] [valueType] [intent] - #[position]->#[source record position]  P[partitionId]K[key] - [summary of value]
	P9K999 - key; #999 - record position; "ID" element/process id; `@"elementid"/[P9K999]` - element with ID and key
	Keys are decomposed into partition id and per partition key (e.g. 2251799813685253 -> P1K005). If single partition, the partition is omitted.
	Long IDs are shortened (e.g. 'startEvent_5d56488e-0570-416c-ba2d-36d2a3acea78' -> 'star..acea78')
--------
C DPLY                   CREATE            - #1-> -1  -1 - 
E PROC                   CREATED           - #2->#1 K01 - process.xml -> "process" (version:1)
E DPLY                   CREATED           - #3->#1 K02 - process.xml
E DPLY                   FULLY_DISTRIBUTED - #4->#1 K02 - 
C PROC_INST_CREATION     CREATE            - #5-> -1  -1 - new <process "process"> (default start)  with variables: {x=variable}
E VAR                    CREATED           - #6->#5 K04 - x->"variable" in <process [K03]>
C PROC_INST              ACTIVATE          - #7->#5 K03 - PROCESS "process" in <process "process"[K03]>
E PROC_INST_CREATION     CREATED           - #8->#5 K05 - new <process "process"> (default start)  with variables: {x=variable}
E PROC_INST              ACTIVATING        - #9->#7 K03 - PROCESS "process" in <process "process"[K03]>
E PROC_INST              ACTIVATED         - #10->#7 K03 - PROCESS "process" in <process "process"[K03]>
C PROC_INST              ACTIVATE          - #11->#7  -1 - START_EVENT "startEv..b44a7b4" in <process "process"[K03]>
C PROC_INST_MODIFICATION MODIFY            - #12-> -1 K03 - ProcessInstanceModificationRecord {"processInstanceKey":2251799813685251,"terminateInstructions":[],"activateInstructions":[{"elementId":"B","ancestorScopeKey":-1,"variableInstructions":[{"elementId":"","variables":{"x":"updated"}}]}]}
E PROC_INST              ACTIVATING        - #13->#11 K06 - START_EVENT "startEv..b44a7b4" in <process "process"[K03]>
E PROC_INST              ACTIVATED         - #14->#11 K06 - START_EVENT "startEv..b44a7b4" in <process "process"[K03]>
C PROC_INST              COMPLETE          - #15->#11 K06 - START_EVENT "startEv..b44a7b4" in <process "process"[K03]>
E VAR                    UPDATED           - #16->#12 K04 - x->"updated" in <process [K03]>
C PROC_INST              ACTIVATE          - #17->#12 K07 - USER_TASK "B" in <process "process"[K03]>
E PROC_INST_MODIFICATION MODIFIED          - #18->#12 K03 - ProcessInstanceModificationRecord {"processInstanceKey":2251799813685251,"terminateInstructions":[],"activateInstructions":[{"elementId":"B","ancestorScopeKey":-1,"variableInstructions":[{"elementId":"","variables":{"x":"updated"}}]}]}
E PROC_INST              COMPLETING        - #19->#15 K06 - START_EVENT "startEv..b44a7b4" in <process "process"[K03]>
E PROC_INST              COMPLETED         - #20->#15 K06 - START_EVENT "startEv..b44a7b4" in <process "process"[K03]>
E PROC_INST              SEQ_FLOW_TAKEN    - #21->#15 K08 - SEQUENCE_FLOW "sequenc..25183c6" in <process "process"[K03]>
C PROC_INST              ACTIVATE          - #22->#15 K09 - USER_TASK "A" in <process "process"[K03]>
E PROC_INST              ACTIVATING        - #23->#17 K07 - USER_TASK "B" in <process "process"[K03]>
E JOB                    CREATED           - #24->#17 K10 - K10 "io.camunda.zeebe:userTask" `@"B"[K07]` 1 retries, in <process "process"[K03]> (no vars)
E PROC_INST              ACTIVATED         - #25->#17 K07 - USER_TASK "B" in <process "process"[K03]>
E PROC_INST              ACTIVATING        - #26->#22 K09 - USER_TASK "A" in <process "process"[K03]>
E JOB                    CREATED           - #27->#22 K11 - K11 "io.camunda.zeebe:userTask" `@"A"[K09]` 1 retries, in <process "process"[K03]> (no vars)
E PROC_INST              ACTIVATED         - #28->#22 K09 - USER_TASK "A" in <process "process"[K03]>
```

### After

```
--------
	[Partition] ['C'ommand/'E'event/'R'ejection] [valueType] [intent] - #[position]->#[source record position]  P[partitionId]K[key] - [summary of value]
	P9K999 - key; #999 - record position; "ID" element/process id; `@"elementid"/[P9K999]` - element with ID and key
	Keys are decomposed into partition id and per partition key (e.g. 2251799813685253 -> P1K005). If single partition, the partition is omitted.
	Long IDs are shortened (e.g. 'startEvent_5d56488e-0570-416c-ba2d-36d2a3acea78' -> 'star..acea78')
--------
C DPLY CREATE            - #1-> -1  -1 - 
E PROC CREATED           - #2->#1 K01 - process.xml -> "process" (version:1)
E DPLY CREATED           - #3->#1 K02 - process.xml
E DPLY FULLY_DISTRIBUTED - #4->#1 K02 - 
C CREA CREATE            - #5-> -1  -1 - new <process "process"> (default start)  with variables: {x=variable}
E VAR  CREATED           - #6->#5 K04 - x->"variable" in <process [K03]>
C PI   ACTIVATE          - #7->#5 K03 - PROCESS "process" in <process "process"[K03]>
E CREA CREATED           - #8->#5 K05 - new <process "process"> (default start)  with variables: {x=variable}
E PI   ACTIVATING        - #9->#7 K03 - PROCESS "process" in <process "process"[K03]>
E PI   ACTIVATED         - #10->#7 K03 - PROCESS "process" in <process "process"[K03]>
C PI   ACTIVATE          - #11->#7  -1 - START_EVENT "startEv..ea8bb20" in <process "process"[K03]>
C MOD  MODIFY            - #12-> -1 K03 - ProcessInstanceModificationRecord {"processInstanceKey":2251799813685251,"terminateInstructions":[],"activateInstructions":[{"elementId":"B","ancestorScopeKey":-1,"variableInstructions":[{"elementId":"","variables":{"x":"updated"}}]}]}
E PI   ACTIVATING        - #13->#11 K06 - START_EVENT "startEv..ea8bb20" in <process "process"[K03]>
E PI   ACTIVATED         - #14->#11 K06 - START_EVENT "startEv..ea8bb20" in <process "process"[K03]>
C PI   COMPLETE          - #15->#11 K06 - START_EVENT "startEv..ea8bb20" in <process "process"[K03]>
E VAR  UPDATED           - #16->#12 K04 - x->"updated" in <process [K03]>
C PI   ACTIVATE          - #17->#12 K07 - USER_TASK "B" in <process "process"[K03]>
E MOD  MODIFIED          - #18->#12 K03 - ProcessInstanceModificationRecord {"processInstanceKey":2251799813685251,"terminateInstructions":[],"activateInstructions":[{"elementId":"B","ancestorScopeKey":-1,"variableInstructions":[{"elementId":"","variables":{"x":"updated"}}]}]}
E PI   COMPLETING        - #19->#15 K06 - START_EVENT "startEv..ea8bb20" in <process "process"[K03]>
E PI   COMPLETED         - #20->#15 K06 - START_EVENT "startEv..ea8bb20" in <process "process"[K03]>
E PI   SEQ_FLOW_TAKEN    - #21->#15 K08 - SEQUENCE_FLOW "sequenc..d3f792d" in <process "process"[K03]>
C PI   ACTIVATE          - #22->#15 K09 - USER_TASK "A" in <process "process"[K03]>
E PI   ACTIVATING        - #23->#17 K07 - USER_TASK "B" in <process "process"[K03]>
E JOB  CREATED           - #24->#17 K10 - K10 "io.camunda.zeebe:userTask" `@"B"[K07]` 1 retries, in <process "process"[K03]> (no vars)
E PI   ACTIVATED         - #25->#17 K07 - USER_TASK "B" in <process "process"[K03]>
E PI   ACTIVATING        - #26->#22 K09 - USER_TASK "A" in <process "process"[K03]>
E JOB  CREATED           - #27->#22 K11 - K11 "io.camunda.zeebe:userTask" `@"A"[K09]` 1 retries, in <process "process"[K03]> (no vars)
E PI   ACTIVATED         - #28->#22 K09 - USER_TASK "A" in <process "process"[K03]>
```
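The new short forms amount to a lookup table from value-type names to abbreviations. A sketch of that kind of lookup, where only the short forms are taken from the logs above; the full value-type names and the fallback behavior are assumptions:

```python
# Reconstructed from the before/after logs above; the real logger's
# table is larger, and the full value-type names here are assumed.
ABBREVIATIONS = {
    "DEPLOYMENT": "DPLY",
    "PROCESS_INSTANCE": "PI",
    "PROCESS_INSTANCE_CREATION": "CREA",
    "PROCESS_INSTANCE_MODIFICATION": "MOD",
}


def abbreviate(value_type: str) -> str:
    # fall back to the full name when no abbreviation is registered
    return ABBREVIATIONS.get(value_type, value_type)
```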

## Related issues

<!-- Which issues are closed by this PR or are related -->

while working on camunda#9663 



Co-authored-by: Nico Korthout <nico.korthout@camunda.com>
aivinog1 pushed a commit that referenced this pull request May 10, 2023
12697: Fix flaky `EmbeddedSubProcessTest` r=koevskinikola a=remcowesterhoud

## Description

<!-- Please explain the changes you made here. -->
The test was flaky because the messages could be published before the subscriptions were opened. Since the messages have no TTL, they expired immediately and were not correlated, which is the expected behavior in that situation.

Log of a failing run:
```
C DPLY         CREATE            - #1-> -1  -1 - 
E PROC         CREATED           - #2->#1 K01 - process.xml -> "process..process" (version:1)
E DPLY         CREATED           - #3->#1 K02 - process.xml
E DPLY         FULLY_DISTRIBUTED - #4->#1 K02 - 
C CREA         CREATE            - #5-> -1  -1 - new <process "process..process"> with variables: {correlationKey=correlationKey}
E VAR          CREATED           - #6->#5 K04 - correlationKey->"correlationKey" in <process [K03]>
C PI           ACTIVATE          - #7->#5 K03 - PROCESS "process..process" in <process "process..process"[K03]>
E CREA         CREATED           - #8->#5 K05 - new <process "process..process"> with variables: {correlationKey=correlationKey}
E PI           ACTIVATING        - #9->#7 K03 - PROCESS "process..process" in <process "process..process"[K03]>
E PI           ACTIVATED         - #10->#7 K03 - PROCESS "process..process" in <process "process..process"[K03]>
C PI           ACTIVATE          - #11->#7  -1 - START_EVENT "startEv..8897acc" in <process "process..process"[K03]>
E PI           ACTIVATING        - #12->#11 K06 - START_EVENT "startEv..8897acc" in <process "process..process"[K03]>
E PI           ACTIVATED         - #13->#11 K06 - START_EVENT "startEv..8897acc" in <process "process..process"[K03]>
C PI           COMPLETE          - #14->#11 K06 - START_EVENT "startEv..8897acc" in <process "process..process"[K03]>
E PI           COMPLETING        - #15->#14 K06 - START_EVENT "startEv..8897acc" in <process "process..process"[K03]>
E PROC_MSG_SUB CREATING          - #16->#14 K07 - "eventSubProcess" (inter.) correlationKey: correlationKey `@[K03]` in <process "process..process"[K03]> (no vars)
E PI           COMPLETED         - #17->#14 K06 - START_EVENT "startEv..8897acc" in <process "process..process"[K03]>
E PI           SEQ_FLOW_TAKEN    - #18->#14 K08 - SEQUENCE_FLOW "sequenc..b7cb1a6" in <process "process..process"[K03]>
C PI           ACTIVATE          - #19->#14 K09 - SUB_PROCESS "subProcess" in <process "process..process"[K03]>
E PI           ACTIVATING        - #20->#19 K09 - SUB_PROCESS "subProcess" in <process "process..process"[K03]>
E PI           ACTIVATED         - #21->#19 K09 - SUB_PROCESS "subProcess" in <process "process..process"[K03]>
C PI           ACTIVATE          - #22->#19  -1 - START_EVENT "startEv..8b12a7c" in <process "process..process"[K03]>
E PI           ACTIVATING        - #23->#22 K10 - START_EVENT "startEv..8b12a7c" in <process "process..process"[K03]>
E PI           ACTIVATED         - #24->#22 K10 - START_EVENT "startEv..8b12a7c" in <process "process..process"[K03]>
C PI           COMPLETE          - #25->#22 K10 - START_EVENT "startEv..8b12a7c" in <process "process..process"[K03]>
E PI           COMPLETING        - #26->#25 K10 - START_EVENT "startEv..8b12a7c" in <process "process..process"[K03]>
E PROC_MSG_SUB CREATING          - #27->#25 K11 - "boundary" (inter.) correlationKey: correlationKey `@[K09]` in <process "process..process"[K03]> (no vars)
E PI           COMPLETED         - #28->#25 K10 - START_EVENT "startEv..8b12a7c" in <process "process..process"[K03]>
E PI           SEQ_FLOW_TAKEN    - #29->#25 K12 - SEQUENCE_FLOW "sequenc..8c72ad0" in <process "process..process"[K03]>
C PI           ACTIVATE          - #30->#25 K13 - SERVICE_TASK "task" in <process "process..process"[K03]>
E PI           ACTIVATING        - #31->#30 K13 - SERVICE_TASK "task" in <process "process..process"[K03]>
E JOB          CREATED           - #32->#30 K14 - K14 "task" `@"task"[K13]` 3 retries, in <process "process..process"[K03]> (no vars)
E PI           ACTIVATED         - #33->#30 K13 - SERVICE_TASK "task" in <process "process..process"[K03]>
C MSG          PUBLISH           - #34-> -1 K01 - "boundary" correlationKey: correlationKey (no vars)
C MSG          PUBLISH           - #35-> -1 K01 - "eventSubProcess" correlationKey: correlationKey (no vars)
E MSG          PUBLISHED         - #36->#34 K15 - "boundary" correlationKey: correlationKey (no vars)
E MSG          EXPIRED           - #37->#34 K15 - "boundary" correlationKey: correlationKey (no vars)
E MSG          PUBLISHED         - #38->#35 K16 - "eventSubProcess" correlationKey: correlationKey (no vars)
E MSG          EXPIRED           - #39->#35 K16 - "eventSubProcess" correlationKey: correlationKey (no vars)
C MSG_SUB      CREATE            - #40-> -1  -1 - "eventSubProcess" (inter.) correlationKey: correlationKey `@[K03]` in <process "process..process"[K03]> (no vars)
C MSG_SUB      CREATE            - #41-> -1  -1 - "boundary" (inter.) correlationKey: correlationKey `@[K09]` in <process "process..process"[K03]> (no vars)
E MSG_SUB      CREATED           - #42->#40 K17 - "eventSubProcess" (inter.) correlationKey: correlationKey `@[K03]` in <process "process..process"[K03]> (no vars)
C PROC_MSG_SUB CREATE            - #43-> -1  -1 - "eventSubProcess" (inter.) `@[K03]` in <process ?[K03]> (no vars)
E MSG_SUB      CREATED           - #44->#41 K18 - "boundary" (inter.) correlationKey: correlationKey `@[K09]` in <process "process..process"[K03]> (no vars)
C PROC_MSG_SUB CREATE            - #45-> -1  -1 - "boundary" (inter.) `@[K09]` in <process ?[K03]> (no vars)
E PROC_MSG_SUB CREATED           - #46->#43 K07 - "eventSubProcess" (inter.) correlationKey: correlationKey `@[K03]` in <process "process..process"[K03]> (no vars)
E PROC_MSG_SUB CREATED           - #47->#45 K11 - "boundary" (inter.) correlationKey: correlationKey `@[K09]` in <process "process..process"[K03]> (no vars)
```

The test isn't really flaky on 8.2+ anymore, thanks to the recently introduced batch processing. However, I believe it's still good to fix it on all versions and keep the test aligned across versions.
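The race can be modeled in a few lines. This is a toy model of the described behavior, not Zeebe code: a message published while no matching subscription is open, and with no TTL, is discarded rather than buffered, so publishing before subscribing loses the message:

```python
class ToyBroker:
    """Toy model of message correlation: no buffering without a TTL."""

    def __init__(self):
        self.subscriptions = set()
        self.correlated = []

    def subscribe(self, name):
        self.subscriptions.add(name)

    def publish(self, name, ttl=0):
        if name in self.subscriptions:
            self.correlated.append(name)
        elif ttl == 0:
            # no open subscription and no TTL: the message expires immediately
            # (a positive TTL would buffer it; buffering is omitted in this sketch)
            pass


broker = ToyBroker()
broker.publish("boundary")   # races ahead of the subscription -> lost (EXPIRED)
broker.subscribe("boundary")
broker.publish("boundary")   # the fix: publish only after the subscription is open
```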

## Related issues

<!-- Which issues are closed by this PR or are related -->

closes camunda#11844 



Co-authored-by: Remco Westerhoud <remco@westerhoud.nl>