Commit

fix(security): CVE-2024-43783: Enforce body limits early in request pipeline

This fixes a weakness (classified as [CWE-770]) that made it possible to
exceed the configured request payload maximums set with the
[`limits.http_max_request_bytes`] configuration option when used in
conjunction with certain configurations, such as coprocessors that read the
request body.

Review the GitHub Advisory, [GHSA-x6xq-whh3-gg32], for specific details and
impacted configurations.

After the fix:

- Request body payload limits are now enforced earlier in the pipeline, ensuring that coprocessors and user plugins respect the configured limit.
- Reading a request body beyond the configured limit will abort the request and return an [HTTP 413] (Content Too Large) response to the client rather than delegating to the code consuming the body (a client-side sketch follows this list). To use different limits, `limits.http_max_request_bytes` must be configured to the desired value.
- Coprocessors, Rhai and Rust plugins do NOT have an opportunity to intercept aborted requests.  Use the telemetry features of the router to observe HTTP 413 events.
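
For illustration, here is a hedged client-side sketch of the new behavior, assuming a router listening on `http://localhost:4000` with the default 2 MB limit and a hyper 0.14 client. None of this is part of the commit itself; the address and client code are assumptions:

```rust
// Illustrative sketch only (not part of this commit): a client sending a
// body larger than the default 2 MB limit should now receive HTTP 413.
// Assumes a router on http://localhost:4000 and hyper 0.14 with tokio.
use hyper::{Body, Client, Method, Request, StatusCode};

#[tokio::main]
async fn main() {
    let oversized = vec![b'a'; 3_000_000]; // > the 2_000_000-byte default limit

    let request = Request::builder()
        .method(Method::POST)
        .uri("http://localhost:4000/")
        .header("content-type", "application/json")
        .body(Body::from(oversized))
        .expect("static request parts are valid");

    let response = Client::new().request(request).await.expect("request failed");

    // The router aborts the read early instead of handing the body to
    // coprocessors or plugins, so the client sees 413 Content Too Large.
    assert_eq!(response.status(), StatusCode::PAYLOAD_TOO_LARGE);
}
```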

[CWE-770]: https://cwe.mitre.org/data/definitions/770.html
[GHSA-x6xq-whh3-gg32]: https://github.com/apollographql/router/security/advisories/GHSA-x6xq-whh3-gg32
[HTTP 413]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/413
[`limits.http_max_request_bytes`]: https://www.apollographql.com/docs/router/configuration/overview/#http_max_request_bytes

---------

Co-authored-by: bryn <bryn@apollographql.com>
Co-authored-by: Gary Pennington <gary@apollographql.com>
Co-authored-by: Jeremy Lempereur <jeremy.lempereur@iomentum.com>
3 people authored and abernix committed Aug 27, 2024
1 parent b0a601d commit 7a9c020
Showing 25 changed files with 1,591 additions and 437 deletions.
34 changes: 34 additions & 0 deletions .changesets/fix_bryn_limits.md
@@ -0,0 +1,34 @@
### Payload limits may exceed configured maximum ([Issue #ISSUE_NUMBER](https://github.com/apollographql/router/issues/ISSUE_NUMBER))

When processing requests, the configured limits defined in the `limits` section may be ignored:
```yaml
limits:
  http_max_request_bytes: 2000000
```
Plugins that execute services during the `router` lifecycle will not respect the configured limits, potentially creating a denial-of-service attack vector.

#### Built-in features affected
* Coprocessors configured to send the entire body of a request are vulnerable to this issue:
```yaml
coprocessor:
  url: http://localhost:8080
  router:
    request:
      body: true
```

#### Fix details
Body size limits are now enforced earlier in the pipeline, ensuring that coprocessors and user plugins respect
the configured limits.
Reading a request body past the configured limit now aborts the request and returns a 413 response
to the client instead of delegating the error to the code reading the body (see the sketch below).
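
A minimal sketch of the general technique, not the router's actual implementation: check the limit while the body streams in, and abort with 413 before an oversized payload is ever buffered or handed downstream. The helper name `read_body_limited` is hypothetical, and hyper 0.14's `HttpBody` API is assumed:

```rust
// Hedged sketch of the technique, not the router's code.
// Assumes hyper 0.14, where `HttpBody::data` yields chunks as they arrive.
use hyper::body::HttpBody;
use hyper::{Body, StatusCode};

async fn read_body_limited(mut body: Body, limit: usize) -> Result<Vec<u8>, StatusCode> {
    let mut buf = Vec::new();
    while let Some(chunk) = body.data().await {
        let chunk = chunk.map_err(|_| StatusCode::BAD_REQUEST)?;
        // Check the limit *while reading*, before the full body is buffered,
        // so an oversized request is rejected as early as possible.
        if buf.len() + chunk.len() > limit {
            return Err(StatusCode::PAYLOAD_TOO_LARGE); // surfaces as HTTP 413
        }
        buf.extend_from_slice(&chunk);
    }
    Ok(buf)
}
```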

#### User impact
Body size limits are now enforced for all requests in the main GraphQL router pipeline. Custom plugins are covered by
this, and any attempt to read the body past the configured limit will abort the request and return a 413 response to the client.

Coprocessors, Rhai, and native plugins do not have an opportunity to intercept aborted requests. Use the
telemetry features within the router if you need to track these events; one possible configuration is sketched below.
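
A minimal sketch of such a telemetry configuration, assuming the router's standard instrumentation events; the exact schema is an assumption and may differ between router versions, so consult the telemetry documentation:

```yaml
# Hedged sketch: log standard router response/error events, whose
# attributes include the HTTP status code, so 413 aborts are observable.
telemetry:
  instrumentation:
    events:
      router:
        response: info
        error: info
```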

By [@bryncooke](https://github.com/bryncooke) in https://github.com/apollographql/router/pull/PULL_NUMBER
114 changes: 35 additions & 79 deletions apollo-router/src/axum_factory/tests.rs
@@ -44,8 +44,6 @@ use test_log::test;
use tokio::io::AsyncRead;
use tokio::io::AsyncReadExt;
use tokio::io::AsyncWriteExt;
#[cfg(unix)]
use tokio::io::BufReader;
use tokio::sync::mpsc;
use tokio_util::io::StreamReader;
use tower::service_fn;
@@ -1270,14 +1268,11 @@ async fn it_answers_to_custom_endpoint() -> Result<(), ApolloRouterError> {
Ok::<_, BoxError>(
http::Response::builder()
.status(StatusCode::OK)
.body(
format!(
"{} + {}",
req.router_request.method(),
req.router_request.uri().path()
)
.into(),
)
.body(format!(
"{} + {}",
req.router_request.method(),
req.router_request.uri().path()
))
.unwrap()
.into(),
)
@@ -1380,14 +1375,11 @@ async fn it_refuses_to_bind_two_extra_endpoints_on_the_same_path() {
Ok::<_, BoxError>(
http::Response::builder()
.status(StatusCode::OK)
.body(
format!(
"{} + {}",
req.router_request.method(),
req.router_request.uri().path()
)
.into(),
)
.body(format!(
"{} + {}",
req.router_request.method(),
req.router_request.uri().path()
))
.unwrap()
.into(),
)
@@ -2076,88 +2068,52 @@ async fn listening_to_unix_socket() {
.await;

assert_eq!(
serde_json::from_slice::<graphql::Response>(&output).unwrap(),
serde_json::from_str::<graphql::Response>(&output).unwrap(),
expected_response,
);

// Get query
let output = send_to_unix_socket(
server.graphql_listen_address().as_ref().unwrap(),
Method::GET,
r#"query=query%7Bme%7Bname%7D%7D"#,
r#"/?query=query%7Bme%7Bname%7D%7D"#,
)
.await;

assert_eq!(
serde_json::from_slice::<graphql::Response>(&output).unwrap(),
serde_json::from_str::<graphql::Response>(&output).unwrap(),
expected_response,
);

server.shutdown().await.unwrap();
}

#[cfg(unix)]
async fn send_to_unix_socket(addr: &ListenAddr, method: Method, body: &str) -> Vec<u8> {
use tokio::io::AsyncBufReadExt;
use tokio::io::Interest;
async fn send_to_unix_socket(addr: &ListenAddr, method: Method, body: &str) -> String {
use tokio::net::UnixStream;

let content = match method {
Method::GET => {
format!(
"{} /?{} HTTP/1.1\r
Host: localhost:4100\r
Content-Length: {}\r
Content-Type: application/json\r
Accept: application/json\r
\n",
method.as_str(),
body,
body.len(),
)
}
Method::POST => {
format!(
"{} / HTTP/1.1\r
Host: localhost:4100\r
Content-Length: {}\r
Content-Type: application/json\r
Accept: application/json\r
{}\n",
method.as_str(),
body.len(),
body
)
}
_ => {
unimplemented!()
}
};
let mut stream = UnixStream::connect(addr.to_string()).await.unwrap();
stream.ready(Interest::WRITABLE).await.unwrap();
stream.write_all(content.as_bytes()).await.unwrap();
stream.flush().await.unwrap();
let stream = BufReader::new(stream);
let mut lines = stream.lines();
let header_first_line = lines
.next_line()
.await
.unwrap()
.expect("no header received");
// skip the rest of the headers
let mut headers = String::new();
let mut stream = lines.into_inner();
loop {
if stream.read_line(&mut headers).await.unwrap() == 2 {
break;
let stream = UnixStream::connect(addr.to_string()).await.unwrap();
let (mut sender, conn) = hyper::client::conn::handshake(stream).await.unwrap();
tokio::task::spawn(async move {
if let Err(err) = conn.await {
println!("Connection failed: {:?}", err);
}
});

let http_body = hyper::Body::from(body.to_string());
let mut request = http::Request::builder()
.method(method.clone())
.header("Host", "localhost:4100")
.header("Content-Type", "application/json")
.header("Accept", "application/json")
.body(http_body)
.unwrap();
if method == Method::GET {
*request.uri_mut() = body.parse().unwrap();
}
// get rest of the buffer as body
let body = stream.buffer().to_vec();
assert!(header_first_line.contains(" 200 "), "");
body

let response = sender.send_request(request).await.unwrap();
let body = response.collect().await.unwrap().to_bytes();
String::from_utf8(body.to_vec()).unwrap()
}

#[tokio::test]
118 changes: 13 additions & 105 deletions apollo-router/src/configuration/mod.rs
@@ -47,6 +47,7 @@ use crate::configuration::schema::Mode;
use crate::graphql;
use crate::notification::Notify;
use crate::plugin::plugins;
use crate::plugins::limits;
use crate::plugins::subscription::SubscriptionConfig;
use crate::plugins::subscription::APOLLO_SUBSCRIPTION_PLUGIN;
use crate::plugins::subscription::APOLLO_SUBSCRIPTION_PLUGIN_NAME;
@@ -154,7 +155,7 @@ pub struct Configuration {

/// Configuration for operation limits, parser limits, HTTP limits, etc.
#[serde(default)]
pub(crate) limits: Limits,
pub(crate) limits: limits::Config,

/// Configuration for chaos testing, trying to reproduce bugs that require uncommon conditions.
/// You probably don’t want this in production!
@@ -251,18 +252,25 @@ impl<'de> serde::Deserialize<'de> for Configuration {
tls: Tls,
apq: Apq,
persisted_queries: PersistedQueries,
limits: Limits,
limits: limits::Config,
experimental_chaos: Chaos,
batching: Batching,
experimental_type_conditioned_fetching: bool,
experimental_apollo_metrics_generation_mode: ApolloMetricsGenerationMode,
experimental_query_planner_mode: QueryPlannerMode,
}
let ad_hoc: AdHocConfiguration = serde::Deserialize::deserialize(deserializer)?;
let mut ad_hoc: AdHocConfiguration = serde::Deserialize::deserialize(deserializer)?;

let notify = Configuration::notify(&ad_hoc.apollo_plugins.plugins)
.map_err(|e| serde::de::Error::custom(e.to_string()))?;

// Allow the limits plugin to use the configuration from the configuration struct.
// This means that the limits plugin will get the regular configuration via plugin init.
ad_hoc.apollo_plugins.plugins.insert(
"limits".to_string(),
serde_json::to_value(&ad_hoc.limits).unwrap(),
);

// Use a struct literal instead of a builder to ensure this is exhaustive
Configuration {
health_check: ad_hoc.health_check,
@@ -319,7 +327,7 @@ impl Configuration {
tls: Option<Tls>,
apq: Option<Apq>,
persisted_query: Option<PersistedQueries>,
operation_limits: Option<Limits>,
operation_limits: Option<limits::Config>,
chaos: Option<Chaos>,
uplink: Option<UplinkConfig>,
experimental_type_conditioned_fetching: Option<bool>,
@@ -439,7 +447,7 @@ impl Configuration {
notify: Option<Notify<String, graphql::Response>>,
apq: Option<Apq>,
persisted_query: Option<PersistedQueries>,
operation_limits: Option<Limits>,
operation_limits: Option<limits::Config>,
chaos: Option<Chaos>,
uplink: Option<UplinkConfig>,
batching: Option<Batching>,
@@ -856,106 +864,6 @@ impl Supergraph {
}
}

/// Configuration for operation limits, parser limits, HTTP limits, etc.
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema)]
#[serde(deny_unknown_fields, default)]
pub(crate) struct Limits {
/// If set, requests with operations deeper than this maximum
/// are rejected with a HTTP 400 Bad Request response and GraphQL error with
/// `"extensions": {"code": "MAX_DEPTH_LIMIT"}`
///
/// Counts depth of an operation, looking at its selection sets,
/// including fields in fragments and inline fragments. The following
/// example has a depth of 3.
///
/// ```graphql
/// query getProduct {
/// book { # 1
/// ...bookDetails
/// }
/// }
///
/// fragment bookDetails on Book {
/// details { # 2
/// ... on ProductDetailsBook {
/// country # 3
/// }
/// }
/// }
/// ```
pub(crate) max_depth: Option<u32>,

/// If set, requests with operations higher than this maximum
/// are rejected with a HTTP 400 Bad Request response and GraphQL error with
/// `"extensions": {"code": "MAX_DEPTH_LIMIT"}`
///
/// Height is based on simple merging of fields using the same name or alias,
/// but only within the same selection set.
/// For example `name` here is only counted once and the query has height 3, not 4:
///
/// ```graphql
/// query {
/// name { first }
/// name { last }
/// }
/// ```
///
/// This may change in a future version of Apollo Router to do
/// [full field merging across fragments][merging] instead.
///
/// [merging]: https://spec.graphql.org/October2021/#sec-Field-Selection-Merging]
pub(crate) max_height: Option<u32>,

/// If set, requests with operations with more root fields than this maximum
/// are rejected with a HTTP 400 Bad Request response and GraphQL error with
/// `"extensions": {"code": "MAX_ROOT_FIELDS_LIMIT"}`
///
/// This limit counts only the top level fields in a selection set,
/// including fragments and inline fragments.
pub(crate) max_root_fields: Option<u32>,

/// If set, requests with operations with more aliases than this maximum
/// are rejected with a HTTP 400 Bad Request response and GraphQL error with
/// `"extensions": {"code": "MAX_ALIASES_LIMIT"}`
pub(crate) max_aliases: Option<u32>,

/// If set to true (which is the default is dev mode),
/// requests that exceed a `max_*` limit are *not* rejected.
/// Instead they are executed normally, and a warning is logged.
pub(crate) warn_only: bool,

/// Limit recursion in the GraphQL parser to protect against stack overflow.
/// default: 500
pub(crate) parser_max_recursion: usize,

/// Limit the number of tokens the GraphQL parser processes before aborting.
pub(crate) parser_max_tokens: usize,

/// Limit the size of incoming HTTP requests read from the network,
/// to protect against running out of memory. Default: 2000000 (2 MB)
pub(crate) http_max_request_bytes: usize,
}

impl Default for Limits {
fn default() -> Self {
Self {
// These limits are opt-in
max_depth: None,
max_height: None,
max_root_fields: None,
max_aliases: None,
warn_only: false,
http_max_request_bytes: 2_000_000,
parser_max_tokens: 15_000,

// This is `apollo-parser`’s default, which protects against stack overflow
// but is still very high for "reasonable" queries.
// https://github.com/apollographql/apollo-rs/blob/apollo-parser%400.7.3/crates/apollo-parser/src/parser/mod.rs#L93-L104
parser_max_recursion: 500,
}
}
}

/// Router level (APQ) configuration
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, Default)]
#[serde(deny_unknown_fields)]
