add pre-commit hook & convenience script to add it. fix some vet errors. #2


Merged

spencerkimball merged 2 commits into cockroachdb:master from andybons_govet on Feb 14, 2014

Conversation

andybons
Contributor

The pre-commit hook will run:

  • go vet
  • all package tests
  • gofmt

From within the cockroach directory, just run the initrepo.sh script and the hook will be enabled for you.
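
For reference, a minimal sketch of what such a hook could look like; this is an editor's illustration of the checks listed above, not the exact hook added by this PR:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the pre-commit checks described above;
# the real hook in this PR may differ in details.
set -e

# gofmt: fail if any Go file still needs reformatting.
unformatted=$(gofmt -l .)
if [ -n "$unformatted" ]; then
  echo "gofmt needed for:" >&2
  echo "$unformatted" >&2
  exit 1
fi

# go vet: static analysis over all packages.
go vet ./...

# Run all package tests.
go test ./...
```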

```bash
#!/usr/bin/env bash

# Exit on error.
set -e
```

nit: add `set -x` as well, for diagnostic output
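
(For the record, the two flags can be combined; `-x` echoes each command before it runs:)

```bash
# Exit on error and trace each command for diagnostic output.
set -ex
```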

@strangemonad
Contributor

LG with a few comments and Go questions

@andybons
Contributor Author

Updated.

@strangemonad
Contributor

LG

spencerkimball added a commit that referenced this pull request Feb 14, 2014
add pre-commit hook & convenience script to add it. fix some vet errors.
@spencerkimball spencerkimball merged commit c0bc483 into cockroachdb:master Feb 14, 2014
@andybons andybons deleted the andybons_govet branch February 15, 2014 19:20
tamird added a commit that referenced this pull request Jul 17, 2015
knz referenced this pull request in knz/cockroach Apr 25, 2016
Specialcase placeholders in casts during the first pass.
@tamird tamird mentioned this pull request May 27, 2016
craig bot pushed a commit that referenced this pull request Jul 12, 2023
106502: batcheval: delete flaky integration test r=aliher1911 a=aliher1911

TestLeaseCommandLearnerReplica is fundamentally flaky: a commit can't retry all underlying replication failures, and the acceptable failure causes are not exposed at the higher levels where the test operates.

The test is supposed to verify that failures to propose under an invalid lease do not cause pipelined writes to fail but are retried instead. More details are described in the original PR #35261, where this test was introduced.
In the absence of direct control over that, the test tries to move leases between replicas, but this action can cause other failures that can't be filtered out. For example:
`failed to repropose 0b9a11f4bef0e3d3 at idx 16 with new lease index: [NotLeaseHolderError] ‹reproposal failed due to closed timestamp›`
and
`stale proposal: command was proposed under lease #2 but is being applied under lease: ...`
While the closed-timestamp failure could be mitigated by changing settings, the second one is opaque to the caller and hard to filter out.

Release note: None
Fixes #104716

Co-authored-by: Oleg Afanasyev <oleg@cockroachlabs.com>
stevendanna added a commit to msbutler/cockroach that referenced this pull request Mar 5, 2024
annrpom pushed a commit to annrpom/cockroach that referenced this pull request Feb 26, 2025
kvoli added a commit to sumeerbhola/cockroach that referenced this pull request Mar 20, 2025
The bug caused change-replicas operations never to be marked as complete
in the `Controller`, causing the mma store rebalancers to stall after
their first change-replicas operation.

Epic: none
Release note: None
wenyihu6 pushed a commit to sumeerbhola/cockroach that referenced this pull request Apr 2, 2025
The bug caused change-replicas operations never to be marked as complete
in the `Controller`, causing the mma store rebalancers to stall after
their first change-replicas operation.

Epic: none
Release note: None
craig bot pushed a commit that referenced this pull request Jun 17, 2025
When a cluster's logs contain a raft panic, the cluster will be extended (by a
week), volume snapshots will be taken, and the cluster will not be destroyed.
This gives us the artifacts for a thorough investigation.

Verified manually via:

```
run --local acceptance/invariant-check-detection/failed=true
```

Here is the (editorialized) output:

```
test-teardown: 2025/05/20 08:15:15 cluster.go:2559: running cmd `([ -d logs ] && grep -RE '^...` on nodes [:1-4]; details in run_081515.744363000_n1-4_d-logs-grep-RE-Fraft.log
test-teardown: 2025/05/20 08:15:16 cluster.go:2995: extending cluster by 168h0m0s
test-teardown: 2025/05/20 08:15:16 cluster.go:1104: saving cluster local [tag:] (4 nodes) for debugging (--debug specified)
test-teardown: 2025/05/20 08:15:16 test_impl.go:478: test failure #2: full stack retained in failure_2.log: (test_runner.go:1705).maybeSaveClusterDueToInvariantProblems: invariant problem - snap name invariant-problem-local-8897676895823393049:
logs/foo.log:F250502 11:37:20.387424 1036 raft/raft.go:2411 ⋮ [T1,Vsystem,n1,s1,r155/1:?/Table/113/1/{43/578…-51/201…}?] 80 match(30115) is out of range [lastIndex(30114)]. Was the raft log corrupted, truncated, or lost?
```
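
The grep command in the first teardown line above is truncated; a rough, hypothetical approximation of that check (the actual command and regex live in the roachtest code and may differ) would be:

```bash
# Hypothetical approximation: look for fatal (F-prefixed) raft log lines.
# The real command and regex are truncated in the log above and may differ.
if [ -d logs ] && grep -RE '^F.*raft' logs > /dev/null; then
  echo "raft fatal error detected; preserving cluster and taking snapshots"
fi
```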

Closes #145953.
Informs #146617.
Informs #138028.

Epic: none
craig bot pushed a commit that referenced this pull request Jun 17, 2025
146990: roachtest: unconditionally save clusters that show raft fatal errors r=tbg a=tbg

When a cluster's logs contain a raft panic, the cluster will be extended (by a
week), volume snapshots will be taken, and the cluster will not be destroyed.
This gives us the artifacts for a thorough investigation.

Verified manually via:

```
run --local acceptance/invariant-check-detection/failed=true
```

Here is the (editorialized) output:

```
test-teardown: 2025/05/20 08:15:15 cluster.go:2559: running cmd `([ -d logs ] && grep -RE '^...` on nodes [:1-4]; details in run_081515.744363000_n1-4_d-logs-grep-RE-Fraft.log
test-teardown: 2025/05/20 08:15:16 cluster.go:2995: extending cluster by 168h0m0s
test-teardown: 2025/05/20 08:15:16 cluster.go:1104: saving cluster local [tag:] (4 nodes) for debugging (--debug specified)
test-teardown: 2025/05/20 08:15:16 test_impl.go:478: test failure #2: full stack retained in failure_2.log: (test_runner.go:1705).maybeSaveClusterDueToInvariantProblems: invariant problem - snap name invariant-problem-local-8897676895823393049:
logs/foo.log:F250502 11:37:20.387424 1036 raft/raft.go:2411 ⋮ [T1,Vsystem,n1,s1,r155/1:?/Table/113/1/{43/578…-51/201…}?] 80 match(30115) is out of range [lastIndex(30114)]. Was the raft log corrupted, truncated, or lost?
```

Closes #145953.
Informs #146617.
Informs #138028.

Fixes #146355.

Epic: none

Co-authored-by: Tobias Grieger <tobias.b.grieger@gmail.com>
craig bot pushed a commit that referenced this pull request Jun 18, 2025
146990: roachtest: unconditionally save clusters that show raft fatal errors r=tbg a=tbg

When a cluster's logs contain a raft panic, the cluster will be extended (by a
week), volume snapshots will be taken, and the cluster will not be destroyed.
This gives us the artifacts for a thorough investigation.

Verified manually via:

```
run --local acceptance/invariant-check-detection/failed=true
```

Here is the (editorialized) output:

```
test-teardown: 2025/05/20 08:15:15 cluster.go:2559: running cmd `([ -d logs ] && grep -RE '^...` on nodes [:1-4]; details in run_081515.744363000_n1-4_d-logs-grep-RE-Fraft.log
test-teardown: 2025/05/20 08:15:16 cluster.go:2995: extending cluster by 168h0m0s
test-teardown: 2025/05/20 08:15:16 cluster.go:1104: saving cluster local [tag:] (4 nodes) for debugging (--debug specified)
test-teardown: 2025/05/20 08:15:16 test_impl.go:478: test failure #2: full stack retained in failure_2.log: (test_runner.go:1705).maybeSaveClusterDueToInvariantProblems: invariant problem - snap name invariant-problem-local-8897676895823393049:
logs/foo.log:F250502 11:37:20.387424 1036 raft/raft.go:2411 ⋮ [T1,Vsystem,n1,s1,r155/1:?/Table/113/1/{43/578…-51/201…}?] 80 match(30115) is out of range [lastIndex(30114)]. Was the raft log corrupted, truncated, or lost?
```

Closes #145953.
Informs #146617.
Informs #138028.

Fixes #146355.

Epic: none

147683: pkg/util/log: parse otlp sink from yaml config r=TheComputerM a=TheComputerM

OpenTelemetry is now an industry standard for observability (o11y) and is
more efficient than other log sinks currently available.

This commit only defines basic configuration options for
the OTLP sink, like address, insecure, and compression,
and adds logic to parse them from the YAML config. The
actual sink implementation will follow in a future commit.
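
A hypothetical sketch of what such a sink entry could look like in the logging YAML config; field names beyond the address, insecure, and compression options named above are the editor's assumptions:

```yaml
# Hypothetical example; the actual schema may differ.
sinks:
  otlp-servers:
    default:
      address: localhost:4317   # OTLP endpoint
      insecure: true            # e.g. for a local collector without TLS
      compression: gzip
      channels: [OPS, HEALTH]   # assumed field, by analogy with other sinks
```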

Informs: #143049

Release note: None

Co-authored-by: Tobias Grieger <tobias.b.grieger@gmail.com>
Co-authored-by: Mudit Somani <mudit.somani@cockroachlabs.com>
wenyihu6 pushed a commit to sumeerbhola/cockroach that referenced this pull request Jun 19, 2025
The bug caused change-replicas operations never to be marked as complete
in the `Controller`, causing the mma store rebalancers to stall after
their first change-replicas operation.

Epic: none
Release note: None
TheComputerM pushed a commit to TheComputerM/cockroach that referenced this pull request Jun 19, 2025
When a cluster's logs contain a raft panic, the cluster will be extended (by a
week), volume snapshots will be taken, and the cluster will not be destroyed.
This gives us the artifacts for a thorough investigation.

Verified manually via:

```
run --local acceptance/invariant-check-detection/failed=true
```

Here is the (editorialized) output:

```
test-teardown: 2025/05/20 08:15:15 cluster.go:2559: running cmd `([ -d logs ] && grep -RE '^...` on nodes [:1-4]; details in run_081515.744363000_n1-4_d-logs-grep-RE-Fraft.log
test-teardown: 2025/05/20 08:15:16 cluster.go:2995: extending cluster by 168h0m0s
test-teardown: 2025/05/20 08:15:16 cluster.go:1104: saving cluster local [tag:] (4 nodes) for debugging (--debug specified)
test-teardown: 2025/05/20 08:15:16 test_impl.go:478: test failure cockroachdb#2: full stack retained in failure_2.log: (test_runner.go:1705).maybeSaveClusterDueToInvariantProblems: invariant problem - snap name invariant-problem-local-8897676895823393049:
logs/foo.log:F250502 11:37:20.387424 1036 raft/raft.go:2411 ⋮ [T1,Vsystem,n1,s1,r155/1:?/Table/113/1/{43/578…-51/201…}?] 80 match(30115) is out of range [lastIndex(30114)]. Was the raft log corrupted, truncated, or lost?
```

Closes cockroachdb#145953.
Informs cockroachdb#146617.
Informs cockroachdb#138028.

Epic: none
wenyihu6 pushed a commit to sumeerbhola/cockroach that referenced this pull request Jul 3, 2025
The bug caused change-replicas operations never to be marked as complete
in the `Controller`, causing the mma store rebalancers to stall after
their first change-replicas operation.

Epic: none
Release note: None
nameisbhaskar pushed a commit to nameisbhaskar/cockroach that referenced this pull request Jul 12, 2025
Looks like the movr series can take a while to initialize in
TestDockerCLI, so we start an empty database instead. This would still
have caught the original regression.

Release note: None
nameisbhaskar pushed a commit to nameisbhaskar/cockroach that referenced this pull request Jul 12, 2025
…-58687

release-20.2: cli: deflake test_demo_global.tcl (attempt cockroachdb#2)