Reorganize docs.rs main page and build examples in CI (#136)
* Reorganize docs.rs main page

* Build other examples in CI

* Fix example name

* Update otel-push example
emschwartz authored Aug 8, 2023
1 parent cb6e251 commit ff1a7cb
Showing 7 changed files with 52 additions and 43 deletions.
5 changes: 4 additions & 1 deletion .github/workflows/ci.yml
@@ -39,8 +39,11 @@ jobs:
- run: cargo build --package example-actix-web
- run: cargo build --package example-axum
- run: cargo build --package example-custom-metrics
- run: cargo build --package example-full-api
- run: cargo build --package example-exemplars-tracing
- run: cargo build --package example-exemplars-tracing-opentelemetry
- run: cargo build --package example-full-api
- run: cargo build --package example-opentelemetry-push


# Make sure the docs can be built
- run: cargo doc --all-features
2 changes: 1 addition & 1 deletion Cargo.toml
@@ -8,7 +8,7 @@ repository = "https://github.com/autometrics-dev/autometrics-rs"
homepage = "https://autometrics.dev"
license = "MIT OR Apache-2.0"
keywords = ["metrics", "prometheus", "opentelemetry"]
categories = ["development-tools::profiling"]
categories = ["development-tools::debugging", "development-tools::profiling"]

[workspace]
default-members = ["autometrics", "autometrics-cli", "autometrics-macros"]
2 changes: 1 addition & 1 deletion README.md
@@ -14,7 +14,7 @@ Autometrics provides a macro that makes it trivial to instrument any function wi
- 💡 Generates powerful Prometheus queries to help quickly identify and debug issues in production
- 🔗 Injects links to live Prometheus charts directly into each function's doc comments
- [📊 Grafana dashboards](https://github.com/autometrics-dev/autometrics-shared#dashboards) work without configuration to visualize the performance of functions & [SLOs](https://docs.rs/autometrics/latest/autometrics/objectives/index.html)
- 🔍 Correlates your code's version with metrics to help [identify commits](https://docs.rs/autometrics/latest/autometrics/#identifying-faulty-commits-with-the-build_info-metric) that introduced errors or latency
- 🔍 Correlates your code's version with metrics to help identify commits that introduced errors or latency
- 📏 Standardizes metrics across services and teams to improve debugging
- ⚖️ Function-level metrics provide useful granularity without exploding cardinality
- [⚡ Minimal runtime overhead](#benchmarks)
2 changes: 1 addition & 1 deletion autometrics/README.md
@@ -16,7 +16,7 @@ Autometrics provides a macro that makes it trivial to instrument any function wi
- 💡 Generates powerful Prometheus queries to help quickly identify and debug issues in production
- 🔗 Injects links to live Prometheus charts directly into each function's doc comments
- [📊 Grafana dashboards](https://github.com/autometrics-dev/autometrics-shared#dashboards) work without configuration to visualize the performance of functions & [SLOs](https://docs.rs/autometrics/latest/autometrics/objectives/index.html)
- 🔍 Correlates your code's version with metrics to help [identify commits](https://docs.rs/autometrics/latest/autometrics/#identifying-faulty-commits-with-the-build_info-metric) that introduced errors or latency
- 🔍 Correlates your code's version with metrics to help identify commits that introduced errors or latency
- 📏 Standardizes metrics across services and teams to improve debugging
- ⚖️ Function-level metrics provide useful granularity without exploding cardinality
- [⚡ Minimal runtime overhead](#benchmarks)
64 changes: 38 additions & 26 deletions autometrics/src/README.md
@@ -10,26 +10,26 @@ Metrics are a powerful and cost-efficient tool for understanding the health and

Autometrics provides a macro that makes it trivial to instrument any function with the most useful metrics: request rate, error rate, and latency. It standardizes these metrics and then generates powerful Prometheus queries based on your function details to help you quickly identify and debug issues in production.

## Benefits
# Benefits

- [`#[autometrics]`](autometrics) macro adds useful metrics to any function or `impl` block, without you thinking about what metrics to collect
- 💡 Generates powerful Prometheus queries to help quickly identify and debug issues in production
- 🔗 Injects links to live Prometheus charts directly into each function's doc comments
- [📊 Grafana dashboards](https://github.com/autometrics-dev/autometrics-shared#dashboards) work without configuration to visualize the performance of functions & [SLOs](objectives)
- 🔍 Correlates your code's version with metrics to help [identify commits](#identifying-faulty-commits-with-the-build_info-metric) that introduced errors or latency
- 🔍 Correlates your code's version with metrics to help identify commits that introduced errors or latency
- 📏 Standardizes metrics across services and teams to improve debugging
- ⚖️ Function-level metrics provide useful granularity without exploding cardinality
- [⚡ Minimal runtime overhead](https://github.com/autometrics-dev/autometrics-rs#benchmarks)

## Advanced Features
# Advanced Features

- [🚨 Define alerts](objectives) using SLO best practices directly in your source code
- [📍 Attach exemplars](exemplars) automatically to connect metrics with traces
- [⚙️ Configurable](#metrics-backends) metric collection library ([`opentelemetry`](https://crates.io/crates/opentelemetry), [`prometheus`](https://crates.io/crates/prometheus), [`prometheus-client`](https://crates.io/crates/prometheus-client) or [`metrics`](https://crates.io/crates/metrics))

See [autometrics.dev](https://docs.autometrics.dev/) for more details on the ideas behind autometrics.

## Example Axum App
# Example Axum App

Autometrics isn't tied to any web framework, but this shows how you can use the library in an [Axum](https://github.com/tokio-rs/axum) server.

@@ -59,21 +59,23 @@ pub async fn main() {
}
```
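
The example body is collapsed in this diff, so here is a minimal sketch of what instrumenting an Axum handler with the macro looks like. The handler name, the axum 0.6-style server setup, and the `prometheus_exporter::init` / `encode_to_string` helpers are assumptions (check the crate docs for the exact exporter API in your version); only the `#[autometrics]` attribute itself is taken from the text above.

```rust
use autometrics::{autometrics, prometheus_exporter};
use axum::{routing::get, Router};

// Any function or `impl` block can be instrumented; the macro records
// request rate, error rate, and latency for this handler.
#[autometrics]
async fn index_handler() -> &'static str {
    "Hello, world!"
}

#[tokio::main]
async fn main() {
    // Set up the Prometheus metrics collector/exporter (assumed helper from
    // the `prometheus-exporter` feature; the exact name may vary by version).
    prometheus_exporter::init();

    let app = Router::new()
        .route("/", get(index_handler))
        // Expose the collected metrics for Prometheus to scrape (assumed helper).
        .route(
            "/metrics",
            get(|| async { prometheus_exporter::encode_to_string().unwrap() }),
        );

    // axum 0.6-style server setup (assumption).
    axum::Server::bind(&"127.0.0.1:3000".parse().unwrap())
        .serve(app.into_make_service())
        .await
        .unwrap();
}
```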

## Identifying faulty commits with the `build_info` metric
# Configuring Autometrics

The `build_info` metric makes it easy to correlate production issues with the commit or version that may have introduced bugs or latency (see [this blog post](https://fiberplane.com/blog/autometrics-rs-0-4-spot-commits-that-introduce-errors-or-slow-down-your-application) for details).
Because Autometrics combines a macro and a library, and supports multiple underlying metrics libraries, different settings are configured in different places.

By default, it attaches the `version` label, but you can also set up your project so that it attaches Git-related labels as well:
## `AutometricsSettings`

| Label | Compile-Time Environment Variables | Default |
|---|---|---|
| `version` | `AUTOMETRICS_VERSION` or `CARGO_PKG_VERSION` | `CARGO_PKG_VERSION` (set by cargo by default) |
| `commit` | `AUTOMETRICS_COMMIT` or `VERGEN_GIT_COMMIT` | `""` |
| `branch` | `AUTOMETRICS_BRANCH` or `VERGEN_GIT_BRANCH` | `""` |
See [`settings`].
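
For orientation, a sketch of what runtime configuration through the settings builder might look like; the `service_name` method and the `init` call are assumptions about the `settings` API, so check the module docs for the builder methods your version actually exposes.

```rust
use autometrics::settings::AutometricsSettings;

fn main() {
    // Configure library-level settings once at startup (assumed builder API).
    AutometricsSettings::builder()
        .service_name("my-api")
        .init();

    // ... start your application here
}
```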

### (Optional) Using `vergen` to set the Git details
## `build.rs` File

You can use the [`vergen`](https://crates.io/crates/vergen) crate to expose the Git information to Autometrics, which will then attach the labels to the `build_info` metric.
### Including Git commit details in the metrics

Autometrics produces a `build_info` metric and writes queries that make it easy to correlate production issues with the commit or version that may have introduced bugs or latency (see [this blog post](https://fiberplane.com/blog/autometrics-rs-0-4-spot-commits-that-introduce-errors-or-slow-down-your-application) for details).

The `version` label is set automatically based on the version in your crate's `Cargo.toml` file.

You can set `commit` and `branch` labels using the `AUTOMETRICS_COMMIT` and `AUTOMETRICS_BRANCH` environment variables, or you can use the [`vergen`](https://crates.io/crates/vergen) crate to attach them automatically:

```sh
cargo add vergen --features git,gitcl
@@ -91,11 +91,9 @@ pub fn main() {
}
```
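
The `build.rs` body is collapsed above, so here is a sketch of wiring up vergen 8 (the crate version the `git,gitcl` features suggest); `EmitBuilder::builder().all_git().emit()` is the assumed vergen API and may differ in other versions.

```rust
// build.rs (sketch, assuming vergen 8 with the `git` and `gitcl` features)
use vergen::EmitBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Emits the VERGEN_GIT_* compile-time environment variables (commit SHA,
    // branch, ...) that Autometrics reads to label the `build_info` metric.
    EmitBuilder::builder().all_git().emit()?;
    Ok(())
}
```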

## Configuring Autometrics

### Custom Prometheus URL

Autometrics inserts Prometheus query links into function documentation. By default, the links point to `http://localhost:9090` but you can configure it to use a custom URL using an environment variable in your `build.rs` file:
The Autometrics macro inserts Prometheus query links into function documentation. By default, the links point to `http://localhost:9090`, but you can configure them to use a custom URL using a compile-time environment variable in your `build.rs` file:

```rust
// build.rs
@@ -107,31 +107,43 @@ pub fn main() {
}
```
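
The body of the block above is also collapsed; a sketch of the full build script follows, assuming the compile-time variable is named `PROMETHEUS_URL` (verify the variable name against the crate documentation).

```rust
// build.rs (sketch; the PROMETHEUS_URL variable name is an assumption)
fn main() {
    // Point the generated documentation links at your own Prometheus instance
    // instead of the default http://localhost:9090.
    println!("cargo:rustc-env=PROMETHEUS_URL=https://prometheus.example.com");
}
```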

### Feature flags
### Disabling documentation generation

If you do not want Autometrics to insert Prometheus query links into the function documentation, set the `AUTOMETRICS_DISABLE_DOCS` compile-time environment variable:

```rust
// build.rs

pub fn main() {
    println!("cargo:rustc-env=AUTOMETRICS_DISABLE_DOCS=1");
}
```

## Feature flags

### Exporting metrics

- `prometheus-exporter` - exports a Prometheus metrics collector and exporter. This is compatible with any of the [Metrics backends](#metrics-backends) and uses `prometheus-client` by default if none are explicitly selected

#### Metrics backends
### Metrics backends

> **Note**
>
> If you are **not** using the `prometheus-exporter`, you must ensure that you are using the exact same version of the metrics library as `autometrics` (and it must come from `crates.io` rather than git or another source). If not, the autometrics metrics will not appear in your exported metrics.
> If you are exporting metrics yourself rather than using the `prometheus-exporter`, you must ensure that you are using the exact same version of the metrics library as `autometrics` (and it must come from `crates.io` rather than git or another source). If not, the autometrics metrics will not appear in your exported metrics.
- `opentelemetry-0_20` - use the [opentelemetry](https://crates.io/crates/opentelemetry) crate for producing metrics.
- `metrics-0_21` - use the [metrics](https://crates.io/crates/metrics) crate for producing metrics
- `prometheus-0_13` - use the [prometheus](https://crates.io/crates/prometheus) crate for producing metrics
- `prometheus-client-0_21` - use the official [prometheus-client](https://crates.io/crates/prometheus-client) crate for producing metrics

#### Exemplars (for integrating metrics with traces)
### Exemplars (for integrating metrics with traces)

See the [exemplars module docs](https://docs.rs/autometrics/latest/autometrics/exemplars/index.html) for details about these features. Currently only supported with the `prometheus-client` backend.

- `exemplars-tracing` - extract arbitrary fields from `tracing::Span`s
- `exemplars-tracing-opentelemetry` - extract the `trace_id` and `span_id` from the `opentelemetry::Context`, which is attached to `tracing::Span`s by the `tracing-opentelemetry` crate
- `exemplars-tracing-opentelemetry-0_20` - extract the `trace_id` and `span_id` from the `opentelemetry::Context`, which is attached to `tracing::Span`s by the `tracing-opentelemetry` crate

#### Custom objective values
### Custom objective values

By default, Autometrics supports a fixed set of percentiles and latency thresholds for objectives. Use these features to enable custom values:
By default, Autometrics supports a fixed set of percentiles and latency thresholds for [`objectives`]. Use these features to enable custom values:

- `custom-objective-latency` - enable this to use custom latency thresholds. Note, however, that the custom latency **must** match one of the buckets configured for your histogram or the queries, recording rules, and alerts will not work.
- `custom-objective-percentile` - enable this to use custom objective percentiles. Note, however, that using custom percentiles requires generating a different recording and alerting rules file using the CLI + Sloth (see [here](https://github.com/autometrics-dev/autometrics-rs/tree/main/autometrics-cli)).
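
These flags only matter once an objective is attached to a function, so for context here is a sketch of defining an objective with the built-in values, following the `objectives` module docs; the function and objective names are made up for illustration.

```rust
use autometrics::autometrics;
use autometrics::objectives::{Objective, ObjectiveLatency, ObjectivePercentile};

// Target: 99% of calls succeed and 99% complete within 250ms. With the
// feature flags above you could substitute custom percentile/latency values.
const API_SLO: Objective = Objective::new("api")
    .success_rate(ObjectivePercentile::P99)
    .latency(ObjectiveLatency::Ms250, ObjectivePercentile::P99);

#[autometrics(objective = API_SLO)]
fn get_user() -> Result<String, ()> {
    Ok("user".to_string())
}

fn main() {
    let _ = get_user();
}
```
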
2 changes: 1 addition & 1 deletion examples/opentelemetry-push/Cargo.toml
@@ -11,5 +11,5 @@ autometrics-example-util = { path = "../util" }
# the version used by autometrics
opentelemetry = { version = "0.20", features = ["metrics", "rt-tokio"] }
opentelemetry-otlp = { version = "0.13", features = ["tonic", "metrics"] }
opentelemetry-semantic-conventions = { version = "0.11" }
opentelemetry-semantic-conventions = { version = "0.12.0" }
tokio = { version = "1", features = ["full"] }
18 changes: 6 additions & 12 deletions examples/opentelemetry-push/src/main.rs
@@ -1,27 +1,21 @@
use autometrics::autometrics;
use autometrics_example_util::sleep_random_duration;
use opentelemetry::metrics::MetricsError;
use opentelemetry::sdk::metrics::MeterProvider;
use opentelemetry::{runtime, Context};
use opentelemetry::sdk::export::metrics::aggregation::cumulative_temporality_selector;
use opentelemetry::sdk::metrics::controllers::BasicController;
use opentelemetry::sdk::metrics::selectors;
use opentelemetry::metrics;
use opentelemetry_otlp::{ExportConfig, WithExportConfig};
use tokio::time::sleep;
use std::error::Error;
use std::time::Duration;
use tokio::time::sleep;

fn init_metrics() -> metrics::Result<BasicController> {
fn init_metrics() -> Result<MeterProvider, MetricsError> {
let export_config = ExportConfig {
endpoint: "http://localhost:4317".to_string(),
..ExportConfig::default()
};
let push_interval = Duration::from_secs(1);
opentelemetry_otlp::new_pipeline()
.metrics(
selectors::simple::inexpensive(),
cumulative_temporality_selector(),
runtime::Tokio,
)
.metrics(runtime::Tokio)
.with_exporter(
opentelemetry_otlp::new_exporter()
.tonic()
@@ -48,7 +42,7 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {

println!("Waiting so that we could see metrics going down...");
sleep(Duration::from_secs(10)).await;
meter_provider.stop(&cx)?;
meter_provider.force_flush(&cx)?;

Ok(())
}
