
Improve Compaction Performance #10300

Merged: 22 commits merged into master from er-batch-encoders on Oct 16, 2018

Conversation

@e-dard (Contributor) commented Sep 24, 2018

This PR adds new batch-oriented tsm1 encoders that aim to improve the encoding performance of the various TSM block types (Float, Integer, Unsigned, Time, String and Boolean) through more efficient implementations of the existing encoding schemes.

No encoding/compression behaviour has been altered, and these encoders are compatible with the existing tsm1 format.

By improving the performance of the encoders, the aim is that compactions themselves will see improved performance.

This PR concerns itself with the following changes:

  • ac90e5f: a batch oriented float encoder (FloatArrayEncodeAll) using Gorilla compression, but avoiding the function call overhead and poor cache effects of the existing bitstream package. Instead, we take all the float values to encode and encode them all at once. Benchmarks show an improvement of around 5x - 16x in encoding performance for a 1,000 value block, compared to the existing encoder implementation. (See a2724e2 for the updated benchmarks, and the usage sketch after this list.)

  • 5a5ffe6: batch oriented integer/unsigned integer encoders (IntegerArrayEncodeAll and UnsignedArrayEncodeAll) using either uncompressed, RLE or simple8b compression. Benchmarks show an improvement of around 1.5x - 1.7x in encoding performance for a 1,000 value block.

  • 5decd99: a batch oriented timestamp encoder (TimeArrayEncodeAll) using either uncompressed, RLE or simple8b compression. Benchmarks show an improvement of around 1.5x - 2.5x in encoding performance for a 1,000 value block. These improvements carry to all blocks, since they all encode timestamps.

  • dd13d79: a batch oriented string encoder (StringArrayEncodeAll). Benchmarks show an improvement of around 13% – 66% in encoding performance for a 1,000 value block. The original encoder was already quite performant.

  • 0a5eb7a: adds block encoders for all types, encoding the values and timestamps into the tsm block.

  • 902ba41: adds a new key iterator, tsmBatchKeyIterator, that uses the batch decode and encode functions to compact TSM blocks.
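
To make the shape of the new batch API concrete, here is a minimal usage sketch. The function name FloatArrayEncodeAll comes from this PR, but the exact signature and the 1.x import path are assumptions and should be checked against the code:

```go
package main

import (
	"fmt"
	"log"

	"github.com/influxdata/influxdb/tsdb/engine/tsm1"
)

func main() {
	values := []float64{1.1, 2.2, 2.2, 3.3}

	// Encode the whole block in a single call rather than one value at a
	// time; the destination buffer can be reused across calls.
	var buf []byte
	block, err := tsm1.FloatArrayEncodeAll(values, buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("encoded %d values into %d bytes\n", len(values), len(block))
}
```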

Not implemented

There are several things that have not been implemented in this PR, and could be considered future work:

  • Pooling of the buffers used for the encoding. I put a lot of effort into reusing the provided buffer. However, it's a non-trivial amount of work to create the memory pools necessary to reuse those buffers: we need to know, for example, when a buffer has been safely written to disk before we can mutate it. This could be future work (a rough sketch of such a pool follows this list).
  • Cache snapshots. The snapshots do not currently use the new encoders. They could, however.
  • Encoding in some offline tooling (though the offline compactor will use these changes).
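
For the pooling point above, a minimal sketch of what such a pool could look like, built on sync.Pool with hypothetical helper names. The hard part the PR calls out (only recycling a buffer once it has been safely written to disk) is expressed here purely as a calling convention:

```go
package tsm1 // hypothetical additions, not part of this PR

import "sync"

var encodeBuffers = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 0, 1024)
		return &b
	},
}

// GetEncodeBuffer hands out a reusable buffer for encoding a block.
func GetEncodeBuffer() *[]byte {
	return encodeBuffers.Get().(*[]byte)
}

// PutEncodeBuffer must only be called once the encoded block has been safely
// written to disk; otherwise the buffer could be mutated while still referenced.
func PutEncodeBuffer(b *[]byte) {
	*b = (*b)[:0] // keep capacity, drop contents
	encodeBuffers.Put(b)
}
```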

TODO

I need to add some compaction benchmarks here....

@benbjohnson (Contributor) left a comment:

I can't say I fully grok the bit twiddling but the test suite looks solid. Just a few nits but otherwise lgtm.

v = sigbits << 58 // Move 6 LSB of sigbits to MSB
mask = v >> 56 // Move 6 MSB to 8 LSB
if m <= 2 {
// The 6 bits fir into the current byte.
Review comment:
fir?

input[i] = (rand.Float64() * math.MaxFloat64) - math.MaxFloat32
}

// Example from the paper
Review comment:
Remove comment.

input := make([]float64, 1000)
for i := 0; i < len(input); i++ {
input[i] = (rand.Float64() * math.MaxFloat64) - math.MaxFloat32
}
Review comment:
Would using testing/quick make more sense so you could adjust the number of iterations and control the determinism of the PRNG?
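
A rough sketch of what that suggestion could look like for the float round trip, written as if it lives in the tsm1 package. The property and decoder usage are illustrative (the batch float decoder is noted elsewhere in this PR as follow-up work), and the existing FloatDecoder API is assumed:

```go
import (
	"math/rand"
	"reflect"
	"testing"
	"testing/quick"
)

func TestFloatArrayEncodeAll_Quick(t *testing.T) {
	cfg := &quick.Config{
		MaxCount: 500,                          // number of generated cases
		Rand:     rand.New(rand.NewSource(42)), // deterministic PRNG seed
	}

	roundTrip := func(values []float64) bool {
		if len(values) == 0 {
			return true // nothing to verify for an empty block
		}
		b, err := FloatArrayEncodeAll(values, nil)
		if err != nil {
			return false
		}

		// Decode with the existing iterator-based decoder and compare.
		var dec FloatDecoder
		if err := dec.SetBytes(b); err != nil {
			return false
		}
		got := make([]float64, 0, len(values))
		for dec.Next() {
			got = append(got, dec.Values())
		}
		return dec.Error() == nil && reflect.DeepEqual(values, got)
	}

	if err := quick.Check(roundTrip, cfg); err != nil {
		t.Fatal(err)
	}
}
```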

// Since we combined multiple blocks, we could have more values than we should put into
// a single block. We need to chunk them up into groups and re-encode them.
return k.chunk{{.Name}}(nil)
} else {
Review comment:
This else is not needed since you're returning in the previous block.


// Read the next block from each TSM iterator
for i, v := range k.buf {
if len(v) == 0 {
Review comment:
You can unindent the code in the block by changing this to:

if len(v) != 0 {
	continue
}

@stuartcarnie (Contributor) left a comment:

Looking great Edd, nice work!

I pushed a couple more improvements to the integer and string encoders on Friday afternoon. Will be happy to contribute more tests or do whatever we need to verify stability of the array encoders and decoders. I'll follow up with a PR for the Float decoder.

@stuartcarnie assigned e-dard and unassigned stuartcarnie and e-dard on Oct 1, 2018
@e-dard changed the title from "[WIP] Improve Compaction Performance" to "Improve Compaction Performance" on Oct 16, 2018
e-dard and others added 13 commits October 16, 2018 12:05
This commit adds a tsm1 function for encoding a batch of floats into a
buffer. Further, it replaces the `bitstream` library used in the
existing encoders (and all the current decoders) with inlined bit
expressions within the encoder, significantly reducing the function call
overhead for larger batches.
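
For illustration, this is the general shape of the inlining described above: bits are packed into a uint64 accumulator and whole bytes are flushed to the output slice, so there is no per-write call into a bitstream type. The names and exact packing scheme here are illustrative, not the PR's code:

```go
package main

import "fmt"

// writeBits appends the low `width` bits of v (width <= 56 assumed here) to an
// accumulator and flushes complete bytes to dst, avoiding a function call per
// bit write. Purely a sketch of the pattern, not the encoder in this PR.
func writeBits(dst []byte, acc *uint64, n *uint, v uint64, width uint) []byte {
	*acc |= v << (64 - width - *n) // place v just below the already-buffered bits
	*n += width
	for *n >= 8 { // flush whole bytes from the top of the accumulator
		dst = append(dst, byte(*acc>>56))
		*acc <<= 8
		*n -= 8
	}
	return dst
}

func main() {
	var (
		out []byte
		acc uint64
		n   uint
	)
	out = writeBits(out, &acc, &n, 0b101, 3)
	out = writeBits(out, &acc, &n, 0x1F, 13)
	if n > 0 { // flush any trailing partial byte
		out = append(out, byte(acc>>56))
	}
	fmt.Printf("%08b\n", out)
}
```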

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice and a randomly generated input slice.

name                   old time/op    new time/op    delta
EncodeFloats/10_seq      1.14µs ± 3%    0.24µs ± 3%  -78.94%  (p=0.000 n=10+10)
EncodeFloats/10_ran      1.69µs ± 2%    0.21µs ± 3%  -87.43%  (p=0.000 n=10+10)
EncodeFloats/100_seq     7.07µs ± 1%    1.72µs ± 1%  -75.62%  (p=0.000 n=7+9)
EncodeFloats/100_ran     15.8µs ± 4%     1.8µs ± 1%  -88.60%  (p=0.000 n=10+9)
EncodeFloats/1000_seq    50.2µs ± 3%    16.2µs ± 2%  -67.66%  (p=0.000 n=10+10)
EncodeFloats/1000_ran     174µs ± 2%      16µs ± 2%  -90.77%  (p=0.000 n=10+10)

name                   old alloc/op   new alloc/op   delta
EncodeFloats/10_seq       0.00B          0.00B          ~     (all equal)
EncodeFloats/10_ran       0.00B          0.00B          ~     (all equal)
EncodeFloats/100_seq      0.00B          0.00B          ~     (all equal)
EncodeFloats/100_ran      0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_ran     0.00B          0.00B          ~     (all equal)

name                   old allocs/op  new allocs/op  delta
EncodeFloats/10_seq        0.00           0.00          ~     (all equal)
EncodeFloats/10_ran        0.00           0.00          ~     (all equal)
EncodeFloats/100_seq       0.00           0.00          ~     (all equal)
EncodeFloats/100_ran       0.00           0.00          ~     (all equal)
EncodeFloats/1000_seq      0.00           0.00          ~     (all equal)
EncodeFloats/1000_ran      0.00           0.00          ~     (all equal)
This commit adds a tsm1 function for encoding a batch of ints into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice, a randomly generated input slice and a
duplicate slice:

name                     old time/op    new time/op    delta
EncodeIntegers/10_seq       144ns ± 2%      41ns ± 1%   -71.46%  (p=0.000 n=10+10)
EncodeIntegers/10_ran       304ns ± 7%     140ns ± 2%   -53.99%  (p=0.000 n=10+10)
EncodeIntegers/10_dup       147ns ± 4%      41ns ± 2%   -72.14%  (p=0.000 n=10+9)
EncodeIntegers/100_seq      483ns ± 7%     208ns ± 1%   -56.98%  (p=0.000 n=10+9)
EncodeIntegers/100_ran     1.64µs ± 7%    1.01µs ± 1%   -38.42%  (p=0.000 n=9+9)
EncodeIntegers/100_dup      484ns ±14%     210ns ± 2%   -56.63%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq    3.11µs ± 2%    1.81µs ± 2%   -41.68%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran    16.9µs ±10%    11.0µs ± 2%   -34.58%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup    3.05µs ± 3%    1.81µs ± 2%   -40.71%  (p=0.000 n=10+8)

name                     old alloc/op   new alloc/op   delta
EncodeIntegers/10_seq       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_ran       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_dup       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_seq      32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_ran       128B ± 0%        0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_dup      32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq     32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran    1.15kB ± 0%    0.00kB       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup     32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)

name                     old allocs/op  new allocs/op  delta
EncodeIntegers/10_seq        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_ran        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_dup        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_seq       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_ran       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_dup       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
This commit adds a tsm1 function for encoding a batch of timestamps into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice, a randomly generated input slice and a
duplicate slice. All slices are sorted.

name                       old time/op    new time/op    delta
EncodeTimestamps/10_seq       153ns ± 2%     104ns ± 2%  -31.62%  (p=0.000 n=9+10)
EncodeTimestamps/10_ran       191ns ± 2%     142ns ± 0%  -25.73%  (p=0.000 n=10+9)
EncodeTimestamps/10_dup       114ns ± 1%      68ns ± 4%  -39.77%  (p=0.000 n=8+10)
EncodeTimestamps/100_seq      704ns ± 2%     321ns ± 2%  -54.44%  (p=0.000 n=9+9)
EncodeTimestamps/100_ran     7.27µs ± 4%    7.01µs ± 2%   -3.59%  (p=0.000 n=10+10)
EncodeTimestamps/100_dup      756ns ± 3%     396ns ± 2%  -47.57%  (p=0.000 n=10+10)
EncodeTimestamps/1000_seq    6.32µs ± 1%    2.46µs ± 2%  -61.01%  (p=0.000 n=8+10)
EncodeTimestamps/1000_ran     108µs ± 0%      68µs ± 3%  -37.57%  (p=0.000 n=8+10)
EncodeTimestamps/1000_dup    7.26µs ± 1%    3.64µs ± 1%  -49.80%  (p=0.000 n=10+8)

name                       old alloc/op   new alloc/op   delta
EncodeTimestamps/10_seq       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/10_ran       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/10_dup       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_seq      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_ran      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_dup      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_ran     0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_dup     0.00B          0.00B          ~     (all equal)

name                       old allocs/op  new allocs/op  delta
EncodeTimestamps/10_seq        0.00           0.00          ~     (all equal)
EncodeTimestamps/10_ran        0.00           0.00          ~     (all equal)
EncodeTimestamps/10_dup        0.00           0.00          ~     (all equal)
EncodeTimestamps/100_seq       0.00           0.00          ~     (all equal)
EncodeTimestamps/100_ran       0.00           0.00          ~     (all equal)
EncodeTimestamps/100_dup       0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_seq      0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_ran      0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_dup      0.00           0.00          ~     (all equal)
This commit adds a tsm1 function for encoding a batch of strings into a
provided buffer. The new function also shares the buffer between the
input data and the snappy encoded output, reducing allocations.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders using
randomly generated strings.

name                old time/op    new time/op    delta
EncodeStrings/10      2.14µs ± 4%    1.42µs ± 4%   -33.56%  (p=0.000 n=10+10)
EncodeStrings/100     12.7µs ± 3%    10.9µs ± 2%   -14.46%  (p=0.000 n=10+10)
EncodeStrings/1000     132µs ± 2%     114µs ± 2%   -13.88%  (p=0.000 n=10+9)

name                old alloc/op   new alloc/op   delta
EncodeStrings/10        657B ± 0%      704B ± 0%    +7.15%  (p=0.000 n=10+10)
EncodeStrings/100     6.14kB ± 0%    9.47kB ± 0%   +54.14%  (p=0.000 n=10+10)
EncodeStrings/1000    61.4kB ± 0%    90.1kB ± 0%   +46.66%  (p=0.000 n=10+10)

name                old allocs/op  new allocs/op  delta
EncodeStrings/10        3.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeStrings/100       3.00 ± 0%      1.00 ± 0%   -66.67%  (p=0.000 n=10+10)
EncodeStrings/1000      3.00 ± 0%      1.00 ± 0%   -66.67%  (p=0.000 n=10+10)
- Inlined the closure to avoid a function call.
- Changed append(b, make([]byte, 8)...) to inline the make call.
- Check for NaN once at the end, assuming NaN is infrequent (the append change is sketched below).
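
A tiny illustration of the second point, with the previous form shown as a comment; this is a sketch of the pattern rather than the PR's code:

```go
package main

import "fmt"

func main() {
	b := make([]byte, 0, 16)

	// Before: grow the buffer via a temporary slice on every write:
	//   b = append(b, make([]byte, 8)...)
	// After: write the eight zero bytes literally, keeping the make call
	// out of the hot encoding loop.
	b = append(b, 0, 0, 0, 0, 0, 0, 0, 0)

	fmt.Println(len(b), cap(b)) // 8 16
}
```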

New performance delta comparing the current iterators to the new batch
function:

name                   old time/op    new time/op    delta
EncodeFloats/10_seq      1.32µs ± 2%    0.17µs ± 2%  -87.39%  (p=0.000 n=10+10)
EncodeFloats/10_ran      2.09µs ± 1%    0.15µs ± 0%  -92.97%  (p=0.000 n=10+9)
EncodeFloats/100_seq     8.37µs ± 2%    1.28µs ± 2%  -84.74%  (p=0.000 n=10+10)
EncodeFloats/100_ran     19.1µs ± 1%     1.3µs ± 1%  -93.08%  (p=0.000 n=9+9)
EncodeFloats/1000_seq    60.4µs ± 1%    12.6µs ± 0%  -79.13%  (p=0.000 n=9+7)
EncodeFloats/1000_ran     212µs ± 1%      12µs ± 1%  -94.53%  (p=0.000 n=9+8)

name                   old alloc/op   new alloc/op   delta
EncodeFloats/10_seq       0.00B          0.00B          ~     (all equal)
EncodeFloats/10_ran       0.00B          0.00B          ~     (all equal)
EncodeFloats/100_seq      0.00B          0.00B          ~     (all equal)
EncodeFloats/100_ran      0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_ran     0.00B          0.00B          ~     (all equal)

name                   old allocs/op  new allocs/op  delta
EncodeFloats/10_seq        0.00           0.00          ~     (all equal)
EncodeFloats/10_ran        0.00           0.00          ~     (all equal)
EncodeFloats/100_seq       0.00           0.00          ~     (all equal)
EncodeFloats/100_ran       0.00           0.00          ~     (all equal)
EncodeFloats/1000_seq      0.00           0.00          ~     (all equal)
EncodeFloats/1000_ran      0.00           0.00          ~     (all equal)
This commit adds a tsm1 function for encoding a batch of booleans into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders using
randomly generated sets of booleans.
e-dard and others added 9 commits October 16, 2018 12:05
The batch-focussed TSM key iterator iterates TSM blocks, decoding and merging
blocks where appropriate using the batch-focussed approaches.
simple8b EncodeAll improvements:

```
name                     old time/op  new time/op  delta
EncodeAll/1_bit-8        28.5µs ± 1%  28.6µs ± 1%     ~     (p=0.133 n=9+10)
EncodeAll/2_bits-8       28.9µs ± 2%  28.7µs ± 0%     ~     (p=0.068 n=10+8)
EncodeAll/3_bits-8       29.3µs ± 1%  28.8µs ± 0%   -1.70%  (p=0.000 n=10+10)
EncodeAll/4_bits-8       29.6µs ± 1%  29.1µs ± 1%   -1.85%  (p=0.000 n=10+10)
EncodeAll/5_bits-8       30.6µs ± 1%  29.8µs ± 2%   -2.70%  (p=0.000 n=10+10)
EncodeAll/6_bits-8       31.3µs ± 1%  30.0µs ± 1%   -4.08%  (p=0.000 n=9+9)
EncodeAll/7_bits-8       32.6µs ± 1%  30.8µs ± 0%   -5.49%  (p=0.000 n=9+9)
EncodeAll/8_bits-8       33.6µs ± 2%  31.0µs ± 1%   -7.77%  (p=0.000 n=10+9)
EncodeAll/10_bits-8      34.9µs ± 0%  31.9µs ± 2%   -8.55%  (p=0.000 n=9+10)
EncodeAll/12_bits-8      36.8µs ± 1%  32.6µs ± 1%  -11.35%  (p=0.000 n=9+10)
EncodeAll/15_bits-8      39.8µs ± 1%  34.1µs ± 2%  -14.40%  (p=0.000 n=10+10)
EncodeAll/20_bits-8      45.2µs ± 3%  36.2µs ± 1%  -19.97%  (p=0.000 n=10+9)
EncodeAll/30_bits-8      55.0µs ± 0%  40.9µs ± 1%  -25.62%  (p=0.000 n=9+9)
EncodeAll/60_bits-8      86.2µs ± 1%  55.2µs ± 1%  -35.92%  (p=0.000 n=10+10)
EncodeAll/combination-8   582µs ± 2%   502µs ± 1%  -13.80%  (p=0.000 n=9+9)
```

EncodeIntegers:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    2.04µs ± 0%    1.50µs ± 1%  -26.22%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    8.80µs ± 2%    6.10µs ± 0%  -30.73%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    2.03µs ± 1%    1.50µs ± 1%  -26.04%  (p=0.008 n=5+5)
```

EncodeTimestamps (ran is improved due to simple8b improvements)

```
name                               old time/op    new time/op    delta
EncodeTimestamps/1000_seq/batch-8    2.64µs ± 1%    2.65µs ± 2%     ~     (p=0.310 n=5+5)
EncodeTimestamps/1000_ran/batch-8    64.0µs ± 1%    33.8µs ± 1%  -47.23%  (p=0.008 n=5+5)
EncodeTimestamps/1000_dup/batch-8    9.32µs ± 0%    9.28µs ± 1%     ~     (p=0.087 n=5+5)
```
Timestamp improvements prior to any improvements to simple8b

```
name                               old time/op    new time/op    delta
EncodeTimestamps/1000_seq/batch-8    2.64µs ± 1%    1.36µs ± 1%  -48.25%  (p=0.008 n=5+5)
EncodeTimestamps/1000_ran/batch-8    64.0µs ± 1%    32.2µs ± 1%  -49.64%  (p=0.008 n=5+5)
EncodeTimestamps/1000_dup/batch-8    9.32µs ± 0%    1.30µs ± 1%  -86.06%  (p=0.008 n=5+5)
```
simple8b encodes deltas[1:], thus deltas[0] >= simple8b.MaxValue is
invalid.

Also changed the loop calculating deltas, RLE and max to be similar to the
batch timestamp encoder, for greater consistency (a rough sketch of that loop
shape follows).
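
A rough sketch of that single-pass loop shape, as read from the commit description; the names are illustrative and the zig-zag/scaling details of the real encoder are omitted:

```go
// analyzeDeltas is an illustrative helper, not the PR's code. In one pass it
// computes the deltas, tracks the maximum delta (used to decide whether
// simple8b packing is possible) and detects the all-deltas-equal case (RLE).
func analyzeDeltas(src []uint64) (deltas []uint64, max uint64, rle bool) {
	if len(src) == 0 {
		return nil, 0, false
	}
	deltas = make([]uint64, len(src))
	deltas[0] = src[0] // the first value is carried through as-is
	rle = true
	for i := 1; i < len(src); i++ {
		deltas[i] = src[i] - src[i-1]
		if deltas[i] > max {
			max = deltas[i]
		}
		if deltas[i] != deltas[1] {
			rle = false // deltas differ, so RLE cannot be used
		}
	}
	return deltas, max, rle
}
```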

Improvements over previous commit:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    1.50µs ± 1%    1.48µs ± 1%  -1.40%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    6.10µs ± 0%    5.69µs ± 2%  -6.58%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    1.50µs ± 1%    1.49µs ± 0%  -1.21%  (p=0.008 n=5+5)
```

Improvements overall:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    2.04µs ± 0%    1.48µs ± 1%  -27.25%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    8.80µs ± 2%    5.69µs ± 2%  -35.29%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    2.03µs ± 1%    1.49µs ± 0%  -26.93%  (p=0.008 n=5+5)
```
Encode the compressed data at the start of the internal buffer. This ensures
the returned slice maintains the entire capacity and is available for
subsequent use.

When we pool / reuse string buffers, this will help considerably.
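
A small sketch of the slice mechanics behind this: by placing the compressed result at the front of the caller's buffer, the returned slice aliases that buffer and keeps its full capacity for later reuse. Names are hypothetical and the real encoder's packing/compression steps are omitted:

```go
package main

import "fmt"

// placeAtFront copies the encoded block to the front of buf so the returned
// slice shares buf's backing array and, crucially, its capacity.
func placeAtFront(buf, encoded []byte) []byte {
	if cap(buf) < len(encoded) {
		return append([]byte(nil), encoded...) // fall back to a fresh allocation
	}
	out := buf[:len(encoded)]
	copy(out, encoded)
	return out
}

func main() {
	buf := make([]byte, 0, 4096)      // e.g. a pooled buffer
	encoded := []byte("compressed...") // stand-in for the snappy output
	out := placeAtFront(buf, encoded)
	fmt.Println(len(out), cap(out)) // len == len(encoded), cap == 4096
}
```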

Improvements over previous commit:

```
name                        old time/op    new time/op    delta
EncodeStrings/10/batch-8       542ns ± 1%     355ns ± 2%   -34.53%  (p=0.008 n=5+5)
EncodeStrings/100/batch-8     5.29µs ± 1%    3.58µs ± 2%   -32.20%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8    48.6µs ± 0%    36.2µs ± 2%   -25.40%  (p=0.008 n=5+5)

name                        old alloc/op   new alloc/op   delta
EncodeStrings/10/batch-8        704B ± 0%        0B       -100.00%  (p=0.008 n=5+5)
EncodeStrings/100/batch-8     9.47kB ± 0%    0.00kB       -100.00%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8    90.1kB ± 0%     0.0kB       -100.00%  (p=0.008 n=5+5)

name                        old allocs/op  new allocs/op  delta
EncodeStrings/10/batch-8        0.00           0.00           ~     (all equal)
EncodeStrings/100/batch-8       1.00 ± 0%      0.00       -100.00%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8      1.00 ± 0%      0.00       -100.00%  (p=0.008 n=5+5)
```
@e-dard force-pushed the er-batch-encoders branch from 1be7af6 to 5054d6f on October 16, 2018 12:38
@e-dard merged commit 0857934 into master Oct 16, 2018
@ghost removed the review label Oct 16, 2018
@e-dard deleted the er-batch-encoders branch on January 7, 2019 11:59