Update references to old "master" branch to "main" #411

Merged
merged 3 commits on Feb 17, 2021
Changes from 1 commit
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -2,7 +2,7 @@ NOTE: FreeBSD builds have not been available since v0.6.0 due to a
cross-compilation issue. The issue for tracking adding support back
can be found at [#317](https://github.com/grafana/agent/issues/317).

# Master (unreleased)
# Main (unreleased)

- [ENHANCEMENT] Support other architectures in installation script. (@rfratto)

@@ -329,7 +329,7 @@ files to the new format.

# v0.5.0 (2020-08-12)

- [FEATURE] A [scrape targets API](https://github.com/grafana/agent/blob/master/docs/api.md#list-current-scrape-targets)
- [FEATURE] A [scrape targets API](https://github.com/grafana/agent/blob/main/docs/api.md#list-current-scrape-targets)
has been added to show every target the Agent is currently scraping, when it
was last scraped, how long it took to scrape, and errors from the last scrape,
if any. (@rfratto)
79 changes: 45 additions & 34 deletions docs/maintaining.md
@@ -3,21 +3,32 @@
This document provides relevant instructions for maintainers of the Grafana
Cloud Agent.

## Master Branch Rename

The `master` branch was renamed to `main` on 17 Feb 2021. If you have already
checked out the repository, you will need to update your local environment:

```bash
git branch -m master main
git fetch origin
git branch -u origin/main main
```
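
If you want to sanity-check these commands before running them on a real checkout, the whole rename can be simulated end-to-end in throwaway repositories. Everything below is a scratch stand-in (the bare repo plays the role of GitHub); only the last three commands are the actual migration steps:

```shell
#!/bin/sh
# Simulation of the master -> main rename using throwaway local repos.
set -e
tmp=$(mktemp -d)

# Stand-in for the pre-rename remote, with "master" as its default branch.
git init -q --bare "$tmp/origin.git"
git --git-dir="$tmp/origin.git" symbolic-ref HEAD refs/heads/master
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git symbolic-ref HEAD refs/heads/master
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial"
git push -q origin master

# Remote side: GitHub renamed the default branch on 17 Feb 2021.
git --git-dir="$tmp/origin.git" branch -m master main
git --git-dir="$tmp/origin.git" symbolic-ref HEAD refs/heads/main

# Local side: the three migration commands from the snippet above.
git branch -m master main
git fetch -q origin
git branch -q -u origin/main main
```

On a real checkout only the last three commands are needed; running `git remote set-head origin -a` afterwards additionally refreshes which branch `origin/HEAD` points at.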

## Releasing

### Prerequisites

Each maintainer performing a release should perform the following steps once
before releasing the Grafana Cloud Agent.

#### Prerelease testing

For testing a release, run the [K3d example](../example/k3d/README.md) locally.
Let it run for about 90 minutes, keeping an occasional eye on the Agent
Operational dashboard (noting that metrics from the scraping service will take
time to show up). After 90 minutes, if nothing has crashed and you see metrics
for both the scraping service and the non-scraping service, the Agent is ready
for release.

#### Add Existing GPG Key to GitHub

@@ -58,7 +69,7 @@ export GPG_TTY=$(tty)
1. Create a new branch to update `CHANGELOG.md` and references to version
numbers across the entire repository (e.g. README.md in the project root).
2. Modify `CHANGELOG.md` with the new version number and its release date.
3. Add a new section in `CHANGELOG.md` for `Master (unreleased)`.
3. Add a new section in `CHANGELOG.md` for `Main (unreleased)`.
4. Go through the entire repository and find references to the previous release
version, updating them to reference the new version.
5. Run `make example-kubernetes` and `make example-dashboards` to update
@@ -74,7 +85,7 @@ export GPG_TTY=$(tty)

```bash
RELEASE=v1.2.3 # UPDATE ME to reference new release
git checkout master # If not already on master
git checkout main # If not already on main
git pull
git tag -s $RELEASE -m "release $RELEASE"
git push origin $RELEASE
@@ -90,7 +101,7 @@ After this final set of steps, you can publish your draft!
2. Edit the drafted release, copying and pasting *notable changes* from the
CHANGELOG. Add a link to the CHANGELOG, noting that the full list of changes
can be found there. Refer to other releases for help with formatting this.
3. Optionally, have other team members review the release draft if you wish
to feel more comfortable with it.
4. Publish the release!

@@ -107,14 +118,14 @@ the latest tag) and pushing it back upstream.
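
That delete-and-re-push flow for a floating tag can be sketched against throwaway repositories. The bare repo is a scratch stand-in for GitHub, the tag and commit names are illustrative, and on the real repository the tag would be GPG-signed (`git tag -s`) as in the release steps above:

```shell
#!/bin/sh
# Sketch: re-point a floating "latest" tag at a new release commit by
# deleting it locally and upstream, re-creating it, and pushing it back.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git --git-dir="$tmp/origin.git" symbolic-ref HEAD refs/heads/main
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git symbolic-ref HEAD refs/heads/main
GIT_C="git -c user.email=you@example.com -c user.name=you"
$GIT_C commit -q --allow-empty -m "v0.12.0"
git tag -a latest -m "latest"
git push -q origin main latest
$GIT_C commit -q --allow-empty -m "v0.13.0"   # the new release commit

# The actual tag move:
git tag -d latest >/dev/null
git push -q origin :refs/tags/latest   # delete the old tag upstream
git tag -a latest -m "latest"          # re-create it at the new commit
git push -q origin latest              # push it back upstream
```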

## `grafana/prometheus` Maintenance

Grafana Labs includes the Agent as part of their internal monitoring, running it
alongside Prometheus. This gives an opportunity to use the Agent to
prototype additions to Prometheus before they are moved upstream. A
`grafana/prometheus` repository maintained by Grafana Labs holds non-trivial
and experimental changes. Having this repository allows experimental features to
be vendored into the Agent and enables faster development iteration. Ideally,
this experimental testing can serve as evidence of usefulness and
correctness when the feature is proposed upstream.

We are committing ourselves to doing the following:

@@ -126,7 +137,7 @@ We are committing ourselves to doing the following:
stable release; we want the Agent's Prometheus roots to be stable.
3. Reduce code drift: The code the Agent uses on top of Prometheus will be
layered on top of a Prometheus release rather than sandwiched in between.
4. Keep the number of experimental changes not merged upstream to a minimum. We're
not trying to fork Prometheus.

Maintenance of the `grafana/prometheus` repository revolves around feature
@@ -137,7 +148,7 @@ version as the `prometheus/prometheus` release it is based off of.
By adding features to the `grafana/prometheus` repository first, we are
committing ourselves to extra maintenance of features that have not yet been
merged upstream. Feature authors will have to babysit their features to
coordinate with the Prometheus release schedule to always be compatible. Maintenance
burden is lightened once each feature is upstreamed, as breaking changes will
no longer happen out of sync with upstream changes for the respective upstreamed
feature.
@@ -148,9 +159,9 @@ to strive to benefit the Prometheus ecosystem at large.

### Creating a New Feature

Grafana Labs developers should try to get all features upstreamed *first*. If
it's clear the feature is experimental or more unproven than the upstream team
is comfortable with, developers should then create a downstream
`grafana/prometheus` feature branch.

For `grafana/prometheus` maintainers to create a new feature, they will do the
@@ -173,7 +184,7 @@ updated for any reason:
feature branch.
2. Open a PR to merge the new changes from the feature branch into the
associated release branch.
3. After updating the release branch, open a PR to update `grafana/agent` by
vendoring the changes using the latest release branch SHA.

### Handling New Upstream Release
@@ -184,46 +195,46 @@ through the following process:
1. Create a new `grafana/prometheus` release branch named
`release-X.Y.Z-grafana`.
2. For all feature branches still not merged upstream, rebase them on top of the
newly created branch. Force push them to update the `grafana/prometheus`
feature branch.
3. Create one or more PRs to introduce the features into the newly created
release branch.
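
The rebase-and-force-push in step 2 can be simulated end-to-end with throwaway repositories. All repository, branch, and file names below are illustrative stand-ins for `grafana/prometheus` and a downstream feature branch:

```shell
#!/bin/sh
# Simulation: rebase a downstream feature branch onto a newly cut
# release branch, then force-push it (with lease) to the remote.
set -e
tmp=$(mktemp -d)
GIT_C="git -c user.email=you@example.com -c user.name=you"
git init -q --bare "$tmp/origin.git"
git --git-dir="$tmp/origin.git" symbolic-ref HEAD refs/heads/main
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git symbolic-ref HEAD refs/heads/main
echo base > base.txt && git add base.txt && $GIT_C commit -qm "base"
git branch my-feature                      # downstream feature branch
echo rel > rel.txt && git add rel.txt && $GIT_C commit -qm "release work"
git branch release-2.24.0-grafana          # newly cut release branch
git checkout -q my-feature
echo feat > feat.txt && git add feat.txt && $GIT_C commit -qm "feature work"
git push -q origin main my-feature release-2.24.0-grafana

# The maintenance steps themselves: rebase onto the new release branch,
# then force-push (with lease) to update the downstream feature branch.
git rebase -q release-2.24.0-grafana
git push -q --force-with-lease origin my-feature
```

`--force-with-lease` is used instead of a bare `--force` so that a concurrent update to the remote feature branch is not silently clobbered.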

Once a new release branch has been created, the previous release branch in
`grafana/prometheus` is considered stale and will no longer receive updates.

### Updating the Agent's vendor

The easiest way to do this is the following:

1. Edit `go.mod` and change the replace directive to the release branch name.
2. Update `README.md` in the Agent to change which version of Prometheus
the Agent is vendoring.
3. Run `go mod tidy && go mod vendor`.
4. Commit and open a PR.

### Gotchas

If the `grafana/prometheus` feature is incompatible with the upstream
`prometheus/prometheus` master branch, merge conflicts would prevent
an upstream PR from being merged. There are a few ways this can be handled
at the feature author's discretion:

When this happens, downstream feature branch maintainers should wait until
a new `prometheus/prometheus` release is available and rebase their feature
branch on top of the latest release. This will make the upstream PR compatible
with the master branch, though the window of compatibility is unpredictable
and may change at any time.

If it proves infeasible to get a feature branch merged upstream within the
"window of upstream compatibility," feature branch maintainers should create
a fork of their branch that is based off of master and use that master-compatible
branch for the upstream PR. Note that this means any changes made to the feature
branch will now have to be mirrored to the master-compatible branch.

### Open Questions

If two feature branches depend on one another, a combined feature branch
(like an "epic" branch) should be created where development of interrelated
features goes. All features within this category go directly to the combined
"epic" branch rather than individual branches.
6 changes: 3 additions & 3 deletions docs/operation-guide.md
@@ -1,11 +1,11 @@
# Operation Guide

## Host Filtering

Host Filtering implements a form of "dumb sharding," where operators may deploy
one Grafana Cloud Agent instance per machine in a cluster, all using the same
configuration, and the Grafana Cloud Agents will only scrape targets that are
running on the same node as the Agent.

Running with `host_filter: true` means that if you have a target whose host
machine is not also running a Grafana Cloud Agent process, _that target will not
@@ -51,7 +51,7 @@ logic; only `host_filter_relabel_configs` will work.
If the determined hostname matches any of the meta labels, the discovered target
is allowed. Otherwise, the target is ignored, and will not show up in the
[targets
API](https://github.com/grafana/agent/blob/master/docs/api.md#list-current-scrape-targets).
API](https://github.com/grafana/agent/blob/main/docs/api.md#list-current-scrape-targets).

## Prometheus "Instances"

2 changes: 1 addition & 1 deletion packaging/deb/grafana-agent.service
@@ -1,6 +1,6 @@
[Unit]
Description=Monitoring system and forwarder
Documentation=https://github.com/grafana/agent/blob/master/docs/README.md
Documentation=https://github.com/grafana/agent/blob/main/docs/README.md
Wants=network-online.target
After=network-online.target

2 changes: 1 addition & 1 deletion packaging/grafana-agent.yaml
@@ -1,5 +1,5 @@
# Sample config for Grafana Agent
# For a full configuration reference, see: https://github.com/grafana/agent/blob/master/docs/configuration-reference.md.
# For a full configuration reference, see: https://github.com/grafana/agent/blob/main/docs/configuration-reference.md.
server:
http_listen_address: '127.0.0.1'
http_listen_port: 9090
2 changes: 1 addition & 1 deletion packaging/rpm/grafana-agent.service
@@ -1,6 +1,6 @@
[Unit]
Description=Monitoring system and forwarder
Documentation=https://github.com/grafana/agent/blob/master/docs/README.md
Documentation=https://github.com/grafana/agent/blob/main/docs/README.md
Wants=network-online.target
After=network-online.target

@@ -6,7 +6,7 @@ local container = k.core.v1.container;
// withIntegrations controls the integrations component of the Agent.
//
// For the full list of options, refer to the configuration reference:
// https://github.com/grafana/agent/blob/master/docs/configuration-reference.md#integrations_config
// https://github.com/grafana/agent/blob/main/docs/configuration-reference.md#integrations_config
withIntegrations(integrations):: {
assert std.objectHasAll(self, '_mode') : |||
withLokiConfig must be merged with the result of calling new.
2 changes: 1 addition & 1 deletion production/tanka/grafana-agent/v1/lib/loki.libsonnet
@@ -7,7 +7,7 @@ local container = k.core.v1.container;
// withLokiConfig adds a Loki config to collect logs.
//
// For the full list of options, refer to the configuration reference:
// https://github.com/grafana/agent/blob/master/docs/configuration-reference.md#loki_config
// https://github.com/grafana/agent/blob/main/docs/configuration-reference.md#loki_config
withLokiConfig(config):: {
assert std.objectHasAll(self, '_mode') : |||
withLokiConfig must be merged with the result of calling new.
10 changes: 5 additions & 5 deletions production/tanka/grafana-agent/v1/lib/prometheus.libsonnet
@@ -51,7 +51,7 @@ local scrape_k8s = import '../internal/kubernetes_instance.libsonnet';
// values for scrape configs and remote_write. For detailed information on
// instance config settings, consult the Agent documentation:
//
// https://github.com/grafana/agent/blob/master/docs/configuration-reference.md#prometheus_instance_config
// https://github.com/grafana/agent/blob/main/docs/configuration-reference.md#prometheus_instance_config
//
// host_filter does not need to be applied here; the library will apply it
// automatically based on how the Agent is being deployed.
@@ -81,7 +81,7 @@ local scrape_k8s = import '../internal/kubernetes_instance.libsonnet';
if !std.objectHas(inst, 'remote_write') || !std.isArray(inst.remote_write)
then []
else inst.remote_write,
}, list)
}, list),
},

// withRemoteWrite overwrites all the remote_write configs provided in
@@ -90,13 +90,13 @@ local scrape_k8s = import '../internal/kubernetes_instance.libsonnet';
// to remote_write to the same place.
//
// Refer to the remote_write specification for all available fields:
// https://github.com/grafana/agent/blob/master/docs/configuration-reference.md#remote_write
// https://github.com/grafana/agent/blob/main/docs/configuration-reference.md#remote_write
withRemoteWrite(remote_writes):: {
assert std.objectHasAll(self, '_mode') : |||
withPrometheusInstances must be merged with the result of calling new,
newDeployment, or newScrapingService.
|||,
assert std.objectHasAll(self, '_prometheus_instances'): |||
assert std.objectHasAll(self, '_prometheus_instances') : |||
withRemoteWrite must be merged with the result of calling
withPrometheusInstances.
|||,
@@ -105,7 +105,7 @@ local scrape_k8s = import '../internal/kubernetes_instance.libsonnet';

_prometheus_instances:: std.map(function(inst) inst {
remote_write: list,
}, super._prometheus_instances)
}, super._prometheus_instances),
},

// scrapeInstanceKubernetes defines an instance config Grafana Labs uses to
28 changes: 14 additions & 14 deletions production/tanka/grafana-agent/v1/lib/tempo.libsonnet
@@ -1,6 +1,6 @@
{
// withTempoConfig adds a Tempo config to collect traces.
//
// For the full list of options, refer to the configuration reference:
//
withTempoConfig(config):: {
@@ -10,10 +10,10 @@
_tempo_config:: config,
},

// withTempoPushConfig configures a location to write traces to.
//
// Available options can be found in the configuration reference:
// https://github.com/grafana/agent/blob/master/docs/configuration-reference.md#tempo_config
// https://github.com/grafana/agent/blob/main/docs/configuration-reference.md#tempo_config
withTempoPushConfig(push_config):: {
assert std.objectHasAll(self, '_tempo_config') : |||
withTempoPushConfig must be merged with the result of calling
@@ -22,8 +22,8 @@
_tempo_config+:: { push_config: push_config },
},

// withTempoSamplingStrategies accepts an object for trace sampling strategies.
//
// Refer to Jaeger's documentation for available fields:
// https://www.jaegertracing.io/docs/1.17/sampling/#collector-sampling-configuration
//
@@ -35,14 +35,14 @@
withTempoConfig.
|||,

assert
std.objectHasAll(self._tempo_config, 'receivers') &&
std.objectHasAll(self._tempo_config.receivers, 'jaeger') : |||
withStrategies can only be used if the tempo config is configured for
receiving Jaeger spans and traces.
|||,

// The main library should detect the presence of _tempo_sampling_strategies
// and create a ConfigMap bound to /etc/agent/strategies.json.
_tempo_sampling_strategies:: strategies,
_tempo_config+:: {
Expand All @@ -57,7 +57,7 @@
},
},

// Configures scrape_configs for discovering meta labels that will be attached
// to incoming metrics and spans whose IP matches the __address__ of the
// target.
withTempoScrapeConfigs(scrape_configs):: {
@@ -97,5 +97,5 @@
insecure_skip_verify: false,
},
},
]
],
}