closedts: bump kv.closed_timestamp.target_duration default #31837

Merged
merged 1 commit into cockroachdb:master from nvanbenschoten/ctDefault
Oct 24, 2018

Conversation

nvanbenschoten
Member

This change bumps the closed timestamp target duration to 30s. We're not
using closed timestamps, except in special cases in 2.1, so it doesn't make
sense for the setting to be so low if it risks pushing live transactions.

Release note: None
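
For context, kv.closed_timestamp.target_duration is an ordinary cluster setting, so operators who still want the old 5s behavior can override the new default cluster-wide. A minimal sketch in Go, assuming a reachable local cluster and the lib/pq driver (the connection string is a placeholder):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // CockroachDB speaks the Postgres wire protocol.
    )

    func main() {
        // Placeholder connection string; adjust host, port, and credentials.
        db, err := sql.Open("postgres",
            "postgresql://root@localhost:26257/?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Override the new 30s default back to the old 5s value.
        if _, err := db.Exec(
            `SET CLUSTER SETTING kv.closed_timestamp.target_duration = '5s'`,
        ); err != nil {
            log.Fatal(err)
        }
    }

The same SET CLUSTER SETTING statement works from any SQL shell; the Go wrapper just shows it in the project's own language.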

@nvanbenschoten nvanbenschoten requested review from tbg and a team October 24, 2018 21:47
@cockroach-teamcity
Member

This change is Reviewable

@nvanbenschoten
Member Author

bors r+

@nvanbenschoten
Member Author

bors r-

@craig
Contributor

craig bot commented Oct 24, 2018

Canceled

This change bumps the closed timestamp target duration to 30s. We're not
using closed timestamps except in special cases in 2.1, so it doesn't make
sense for the setting to be so low.

Release note: None
@nvanbenschoten
Member Author

Ok, fixed the lint error and waited for TC.

bors r+

craig bot pushed a commit that referenced this pull request Oct 24, 2018
31829: util: Send only last metric value to Graphite r=nvanbenschoten a=neeral

Cockroach was emitting a ridiculously large number of metrics per minute
because an instance of PrometheusExporter was being reused without
clearing it after every iteration.

Release note:
Fix a bug in the Graphite metrics sender where Cockroach was collecting
and sending all data points since startup instead of only the latest
data point.

31837: closedts: bump kv.closed_timestamp.target_duration default r=nvanbenschoten a=nvanbenschoten

This change bumps the closed timestamp target duration to 30s. We're not
using closed timestamps, except in special cases in 2.1, so it doesn't make
sense for the setting to be so low if it risks pushing live transactions.

Release note: None

Co-authored-by: neeral <neeral@users.noreply.github.com>
Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
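
The Graphite fix above (#31829) is a classic accumulate-without-reset bug: a long-lived exporter keeps appending samples, so each push resends every data point since startup. A minimal sketch of the pattern, using hypothetical names (Exporter, Scrape, Reset, pushToGraphite) rather than the real PrometheusExporter API:

    package main

    import "fmt"

    // Exporter is a hypothetical stand-in for the reused exporter object.
    type Exporter struct {
        samples []float64 // accumulates scraped values across iterations
    }

    func (e *Exporter) Scrape(v float64) { e.samples = append(e.samples, v) }

    // Reset drops already-sent samples; this is the step the bug omitted.
    func (e *Exporter) Reset() { e.samples = e.samples[:0] }

    func pushToGraphite(samples []float64) {
        fmt.Println("pushing", len(samples), "samples")
    }

    func main() {
        e := &Exporter{}
        for i := 0; i < 3; i++ {
            e.Scrape(float64(i))
            pushToGraphite(e.samples) // without Reset, push i resends all i+1 samples
            e.Reset()                 // the fix: clear after every iteration
        }
    }

Clearing the accumulator after each push makes every iteration send only the samples collected since the previous one, which is what the release note above describes.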
@craig
Contributor

craig bot commented Oct 24, 2018

Build succeeded

@craig craig bot merged commit e2ea965 into cockroachdb:master Oct 24, 2018
@nvanbenschoten nvanbenschoten deleted the nvanbenschoten/ctDefault branch October 28, 2018 01:17
ajwerner added a commit to ajwerner/cockroach that referenced this pull request Dec 18, 2018
…efeed

Before this PR, ranges that emitted min lease applied indices (MLAIs) of
zero were ignored. This happened because the logic that tracks and stores
MLAIs takes care not to store indices less than the currently stored value.
Go's maps return zero values when queried for non-existent keys, and because
0 < 0 is false, zero entries were never added to the map.

This PR was motivated by the formerly unstable cdc/rangefeed roachtest.
The author has verified that the formerly flaky cdc/rangefeed roachtest now
passes reliably after minor changes to the test. The change updates the
target steady latency from 1m to 2m to account for the increase of
kv.closed_timestamp.target_duration from 5s to 30s (see cockroachdb#31837).
It also adds a short sleep between installing and running the TPCC workload,
which seems to deflake some errors that @danhhz had observed. Lastly, it
increases the replica_rangefeed channel buffer size from 512 to 4096 to
mitigate observed "buffer capacity exceeded due to slow consumer" errors.

Release note: None
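
The map behavior described above is easy to reproduce: a Go map lookup on a missing key yields the value type's zero value, so a guard like "mlai > m[rangeID]" silently drops a legitimate zero. A minimal sketch with hypothetical names (recordMLAI, rangeID); the actual tracking code in the closedts package is more involved:

    package main

    import "fmt"

    // recordMLAIBuggy keeps the max observed min lease applied index per
    // range. A missing key reads as 0, and 0 > 0 is false, so a legitimate
    // MLAI of zero is never stored.
    func recordMLAIBuggy(m map[int64]int64, rangeID, mlai int64) {
        if mlai > m[rangeID] {
            m[rangeID] = mlai
        }
    }

    // recordMLAI uses the comma-ok idiom to distinguish "absent" from "zero".
    func recordMLAI(m map[int64]int64, rangeID, mlai int64) {
        if cur, ok := m[rangeID]; !ok || mlai > cur {
            m[rangeID] = mlai
        }
    }

    func main() {
        buggy, fixed := map[int64]int64{}, map[int64]int64{}
        recordMLAIBuggy(buggy, 1, 0)
        recordMLAI(fixed, 1, 0)
        fmt.Println(len(buggy), len(fixed)) // 0 1: the buggy map dropped the zero entry
    }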
craig bot pushed a commit that referenced this pull request Dec 18, 2018
33243: closedts: fix bug ignoring zero MLAIs to stabilize roachtest cdc/rangefeed r=ajwerner a=ajwerner

Before this PR, ranges that emitted min lease applied indices (MLAIs) of
zero were ignored. This happened because the logic that tracks and stores
MLAIs takes care not to store indices less than the currently stored value.
Go's maps return zero values when queried for non-existent keys, and because
0 < 0 is false, zero entries were never added to the map.

This PR was motivated by the formerly unstable cdc/rangefeed roachtest.
The author has verified that the formerly flaky cdc/rangefeed roachtest now
passes reliably after minor changes to the test. The change updates the
target steady latency from 1m to 2m to account for the increase of
kv.closed_timestamp.target_duration from 5s to 30s (see #31837). It also
adds a short sleep between installing and running the TPCC workload, which
seems to deflake some errors that @danhhz had observed. Lastly, it
increases the replica_rangefeed channel buffer size from 512 to 4096 to
mitigate observed "buffer capacity exceeded due to slow consumer" errors.

Release note: None

Co-authored-by: Andrew Werner <ajwerner@cockroachlabs.com>
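
The buffer bump in the same change is also easy to picture: rangefeed events queue in a bounded buffer, and once a slow consumer lets it fill, the producer fails with the "buffer capacity exceeded" error rather than block. A minimal sketch under assumptions, with a bare buffered channel standing in for the real, more elaborate per-registration buffer:

    package main

    import (
        "errors"
        "fmt"
    )

    var errBufferExceeded = errors.New(
        "buffer capacity exceeded due to slow consumer")

    // eventBuffer is a hypothetical stand-in for the per-registration buffer.
    type eventBuffer struct {
        ch chan int
    }

    // send enqueues without blocking; a full buffer means the consumer lags.
    func (b *eventBuffer) send(ev int) error {
        select {
        case b.ch <- ev:
            return nil
        default:
            return errBufferExceeded
        }
    }

    func main() {
        buf := &eventBuffer{ch: make(chan int, 512)}
        // With no consumer draining the channel, the 513th event overflows
        // a 512-slot buffer; a 4096-slot buffer just raises that threshold.
        for i := 0; i < 513; i++ {
            if err := buf.send(i); err != nil {
                fmt.Println("event", i, ":", err)
                return
            }
        }
    }

Raising the capacity from 512 to 4096 does not remove the failure mode; it only gives a slow consumer four times as much slack before the error fires.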
changangela pushed a commit to changangela/cockroach that referenced this pull request Dec 19, 2018
…efeed
