The Rust Runtime for Lambda implements Lambda's interface for custom runtimes. Internally, the runtime uses a polling strategy to receive invoke events as JSON data from Lambda. Then, it transforms those JSON events into typed structures that are processed by the Rust code that our users implement.
The logic that polls for events, transforms them into structures, and calls the user-defined handler is all encapsulated in the Runtime::run function. This tight coupling makes it very hard to extend the runtime, or even to give people some extra control over what the runtime does. People need to copy parts of the runtime when they want to extend it, like we initially did with the streaming support. This extra overhead shows that the runtime lacks the extensibility and control that some users need.
I'd like to propose breaking the Runtime into Tower services, and adding a layer system that allows users to wrap the Runtime's internals with user-defined logic. The following example shows how users would define a layer and instruct the runtime to wrap it around our core logic:
```rust
use lambda_runtime::{service_fn, Error as LambdaError, Invocation, LambdaEvent, Runtime};
use futures_util::future::BoxFuture;
use tower::{Layer, Service};
use std::task::{Context, Poll};

#[derive(Clone)]
struct LambdaTelemetryLayer;

impl<S> Layer<S> for LambdaTelemetryLayer {
    type Service = LambdaTelemetryMiddleware<S>;

    fn layer(&self, inner: S) -> Self::Service {
        LambdaTelemetryMiddleware { inner }
    }
}

#[derive(Clone)]
struct LambdaTelemetryMiddleware<S> {
    inner: S,
}

impl<S> Service<Invocation> for LambdaTelemetryMiddleware<S>
where
    S: Service<Invocation, Response = ()> + Send + 'static,
    S::Future: Send + 'static,
    S::Error: Into<LambdaError>,
{
    type Response = ();
    type Error = S::Error;
    // `BoxFuture` is a type alias for `Pin<Box<dyn Future + Send + 'a>>`
    type Future = BoxFuture<'static, Result<Self::Response, Self::Error>>;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.inner.poll_ready(cx)
    }

    fn call(&mut self, invoke: Invocation) -> Self::Future {
        let future = self.inner.call(invoke);
        // Flush metrics here.
        Box::pin(async move { future.await })
    }
}

async fn my_handler(_event: LambdaEvent<serde_json::Value>) -> Result<(), LambdaError> {
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), LambdaError> {
    let func = service_fn(my_handler);
    Runtime::initialize(func).layer(LambdaTelemetryLayer).run().await
}
```
If you're familiar with Axum and Tower's middleware system, you'll notice that this is pretty much a copy of that design.
In the previous example, we're using a new type, `Invocation`, to pass the raw information received from Lambda. This type could be an alias of the `http::Response` that we receive, but I think it'd be clearer as an internally defined type:
```rust
struct Invocation {
    /// The http response parts received from Lambda
    pub parts: http::response::Parts,
    /// The http response body received from Lambda
    pub body: hyper::Body,
}
```
This new type could include utility functions to extract some of the context information that we provide to the current Lambda functions before the function gets invoked, for example, the `Context`.
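As a sketch of what such a utility could look like. The header names below are the ones the Lambda Runtime API returns on the next-invocation response; the simplified `Invocation` and `Context` types here are self-contained stand-ins for the real `http`/`hyper`-backed types, purely for illustration:

```rust
use std::collections::HashMap;

/// Simplified stand-in for the proposed `Invocation` type, using a
/// plain map instead of `http::response::Parts` so the sketch is
/// self-contained.
struct Invocation {
    /// Response headers received from the Lambda Runtime API.
    headers: HashMap<String, String>,
}

/// A minimal subset of the context the runtime currently builds for
/// the user's handler.
#[derive(Debug, PartialEq)]
struct Context {
    request_id: String,
    deadline_ms: u64,
}

impl Invocation {
    /// Extract the invocation context from the Runtime API headers.
    fn context(&self) -> Option<Context> {
        Some(Context {
            request_id: self.headers.get("lambda-runtime-aws-request-id")?.clone(),
            deadline_ms: self
                .headers
                .get("lambda-runtime-deadline-ms")?
                .parse()
                .ok()?,
        })
    }
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert(
        "lambda-runtime-aws-request-id".to_string(),
        "8476a536-e9f4-11e8-9739-2dfe598c3fcd".to_string(),
    );
    headers.insert("lambda-runtime-deadline-ms".to_string(), "1542409706888".to_string());

    let invocation = Invocation { headers };
    let ctx = invocation.context().expect("missing context headers");
    assert_eq!(ctx.request_id, "8476a536-e9f4-11e8-9739-2dfe598c3fcd");
    assert_eq!(ctx.deadline_ms, 1542409706888);
}
```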
Implementation details
To implement this layering system, we'd make the runtime implement Tower's Service trait to process the invocation from Lambda. This new service would carry the implementation that you can see in this block of code in the current Runtime.
Because we're implementing the Service trait, we could also clean up some of that code:
We would not need to check if the handler is ready manually. That would be the responsibility of the Service::poll_ready function.
The panic handling logic could be removed and provided by an additional internal layer.
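To illustrate the idea behind that panic-handling layer, here's a self-contained sketch using `std::panic::catch_unwind` directly, with a plain closure standing in for the real Tower service; the function name is hypothetical:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

/// Sketch of the panic-handling idea: wrap the inner handler call and
/// convert a panic into a regular error that the runtime can report to
/// the Runtime APIs, instead of crashing the whole process.
fn call_with_panic_guard<F>(handler: F) -> Result<String, String>
where
    F: FnOnce() -> String,
{
    catch_unwind(AssertUnwindSafe(handler))
        .map_err(|_| "handler panicked; reporting invocation error".to_string())
}

fn main() {
    // A well-behaved handler succeeds.
    let ok = call_with_panic_guard(|| "done".to_string());
    assert_eq!(ok, Ok("done".to_string()));

    // A panicking handler becomes an error instead of aborting the runtime.
    let err = call_with_panic_guard(|| -> String { panic!("boom") });
    assert!(err.is_err());
}
```

In the real runtime this logic would live in an internal Tower layer, so it wraps whatever service stack the user has built.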
The current Runtime::run function would be reduced to something like this:
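The original snippet isn't reproduced here. Conceptually, `run` would shrink to a loop that polls the next event and hands it to the (possibly layered) service. Below is a self-contained, synchronous sketch with stand-in types; the real version would use the async Tower `Service` trait and the Runtime API client:

```rust
/// Stand-in for the Tower `Service` trait used by the real runtime.
trait Service<Request> {
    type Response;
    type Error;
    fn call(&mut self, req: Request) -> Result<Self::Response, Self::Error>;
}

/// Stand-in for the raw invocation data polled from Lambda.
struct Invocation(String);

struct Runtime<S> {
    service: S,
}

impl<S> Runtime<S>
where
    S: Service<Invocation, Response = (), Error = String>,
{
    /// What `Runtime::run` could reduce to: poll events and let the
    /// service do all the per-invocation work.
    fn run(mut self, events: Vec<Invocation>) -> Result<usize, String> {
        let mut processed = 0;
        for invocation in events {
            self.service.call(invocation)?;
            processed += 1;
        }
        Ok(processed)
    }
}

/// Trivial service used to exercise the loop.
struct CountingService;

impl Service<Invocation> for CountingService {
    type Response = ();
    type Error = String;
    fn call(&mut self, invocation: Invocation) -> Result<(), String> {
        println!("processing {}", invocation.0);
        Ok(())
    }
}

fn main() {
    let runtime = Runtime { service: CountingService };
    let events = vec![Invocation("a".into()), Invocation("b".into())];
    assert_eq!(runtime.run(events), Ok(2));
}
```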
And the Service implementation would be isolated to something like this:
```rust
impl<T> Service<Invocation> for Handler<T>
where
    T: for<'de> Deserialize<'de> + Send,
{
    type Response = ();
    type Error = Error;
    type Future = BoxFuture<'static, Result<Self::Response, Self::Error>>;

    #[inline]
    fn poll_ready(&mut self, cx: &mut std::task::Context<'_>) -> std::task::Poll<Result<(), Self::Error>> {
        self.fn_handler.poll_ready(cx)
    }

    fn call(&mut self, invoke: Invocation) -> Self::Future {
        #[cfg(debug_assertions)]
        if invoke.parts.status == http::StatusCode::NO_CONTENT {
            // Ignore the event if the status code is 204.
            // This is a way to keep the runtime alive when
            // there are no events pending to be processed.
            return Box::pin(future::ready(Ok(())));
        }

        let request_id = invoke.request_id.to_string();
        let (parts, body, ctx) = invoke.into_parts();
        let request_span = ctx.request_span();

        let fut = async move {
            let body = hyper::body::to_bytes(body).await?;
            trace!("response body - {}", std::str::from_utf8(&body)?);

            #[cfg(debug_assertions)]
            if parts.status.is_server_error() {
                error!("Lambda Runtime server returned an unexpected error");
                return Err(parts.status.to_string().into());
            }

            let lambda_event = match deserializer::deserialize(&body, ctx) {
                Ok(lambda_event) => lambda_event,
                Err(err) => {
                    let req = build_event_error_request(&request_id, err)?;
                    self.client.call(req).await.expect("Unable to send response to Runtime APIs");
                    return Ok(());
                }
            };

            let response = self.fn_handler.call(lambda_event).await;
            let req = match response {
                Ok(response) => {
                    trace!("Ok response from handler (run loop)");
                    EventCompletionRequest {
                        request_id: &request_id,
                        body: response,
                        _unused_b: PhantomData,
                        _unused_s: PhantomData,
                    }
                    .into_req()
                }
                Err(err) => build_event_error_request(&request_id, err),
            }?;
            self.client.call(req).await.expect("Unable to send response to Runtime APIs");

            Ok(())
        }
        .instrument(request_span);

        Box::pin(fut)
    }
}
```
With that separation of responsibilities, the Runtime could implement a layering mechanism using Tower's Layer traits:
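To show the shape of that mechanism, here's a self-contained sketch of how a `layer` method could compose a user-provided layer around the runtime's current service. The `Layer` trait below is a minimal stand-in for Tower's, and the `Tag`/`Tagged` types are made up for illustration:

```rust
/// Stand-in for Tower's `Layer` trait: a layer wraps a service in a
/// new service.
trait Layer<S> {
    type Service;
    fn layer(&self, inner: S) -> Self::Service;
}

struct Runtime<S> {
    service: S,
}

impl<S> Runtime<S> {
    /// What `Runtime::layer` could look like: apply a user-provided
    /// layer around the runtime's current service, returning a runtime
    /// that drives the wrapped service instead.
    fn layer<L: Layer<S>>(self, l: L) -> Runtime<L::Service> {
        Runtime { service: l.layer(self.service) }
    }
}

/// Example layer that tags every service it wraps, so we can observe
/// the wrapping.
struct Tag(&'static str);

struct Tagged<S> {
    tag: &'static str,
    inner: S,
}

impl<S> Layer<S> for Tag {
    type Service = Tagged<S>;
    fn layer(&self, inner: S) -> Tagged<S> {
        Tagged { tag: self.0, inner }
    }
}

fn main() {
    let runtime = Runtime { service: "core" };
    let layered = runtime.layer(Tag("telemetry"));
    assert_eq!(layered.service.tag, "telemetry");
    assert_eq!(layered.service.inner, "core");
}
```

Because each `layer` call changes the service's type, layers naturally nest: the last layer added is the outermost one, exactly as in Tower.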
The new service handler would be initialized with the Runtime::initialize function, and it'd be the code that ends up calling the user provided function handler:
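The snippet isn't captured above; as a minimal, self-contained sketch of the constructor's role (storing the user-provided handler as the innermost service, ready to be wrapped by layers before `run`):

```rust
struct Runtime<S> {
    service: S,
}

impl<S> Runtime<S> {
    /// What `Runtime::initialize` could look like: store the
    /// user-provided handler service as the innermost service of the
    /// stack that `run` will drive.
    fn initialize(handler: S) -> Self {
        Runtime { service: handler }
    }
}

fn main() {
    // A closure stands in for the user's `service_fn` handler.
    let runtime = Runtime::initialize(|event: &str| format!("handled {event}"));
    assert_eq!((runtime.service)("ping"), "handled ping");
}
```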
Caveats

I've tried to implement this, and I've bumped into a lot of problems with our type system. It probably requires more knowledge about Tower than I currently have. It's not as trivial as taking the code that I drafted above and putting it in the project, hence this RFC.
Summary
By implementing this new layering system, the Rust runtime for Lambda would be more flexible and easier to extend. It could help users share layers easily, because they could build on Tower's already established layering ecosystem. Issues like #691 would be fairly easy to implement and share with the community.