
Conversation

@davidbarsky (Contributor) commented Dec 26, 2018:

This PR includes a sample refactor/async runtime and a mock runtime for testing. Neither is feature-complete, but I wanted to get feedback on this before going forward.

@softprops (Contributor):

I'll post some feedback inline, but it may be good to enumerate the motivation/goals of the change to set up context for review/feedback.

@softprops (Contributor) left a comment:

My favorite change so far here is the serde feature flag, but I feel like that could easily be a simple, separate pull 'n merge independently of the strawman.

[dependencies]
http = "0.1"
serde = "^1"
serde = { version = "^1", features = ["derive"] }
Contributor:

Cool. I just learned about this serde feature flag yesterday when trying to solve a similar problem in dynomite :)

Contributor (Author):

Nice! It's a cool feature!
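For readers following along, a minimal sketch of what the derive feature enables, assuming serde_json is also a dependency; the Event type and its fields here are made up for illustration, not types from this PR:

// Hypothetical example: with serde = { version = "^1", features = ["derive"] },
// the derive macros come from the serde crate itself, with no separate
// serde_derive dependency line in Cargo.toml.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct Event {
    request_id: String,
    payload: String,
}

fn main() {
    let event = Event {
        request_id: "abc-123".to_string(),
        payload: "hello".to_string(),
    };
    // Round-trip through JSON; assumes serde_json is also in [dependencies].
    let json = serde_json::to_string(&event).expect("serialize");
    let back: Event = serde_json::from_str(&json).expect("deserialize");
    println!("{:?}", back);
}

The practical win is that serde_derive no longer has to be listed and versioned separately in Cargo.toml.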

name = "simple-lambda-runtime"
version = "0.1.0"
authors = ["David Barsky <dbarsky@amazon.com>"]
edition = "2018"
Contributor:

I'd add a description key to the package to describe what this is or what its purpose is. Might make review clearer.

Contributor (Author):

That's a good call, will do.

}

fn call(&mut self, req: Event) -> Self::Future {
result(self.run(req))
Contributor:

I read through this a couple of times and was a bit confused about the motivation for adding a new tower_service dependency and the purpose of this Service impl. It seems it's literally not adding any value over calling futures::future::result(self.run(req)) directly. Can you add some comments describing the need for the dependency?

To set up context for where I'm coming from: I'm a bit picky about adding dependencies to the "low level" crate because my deployments are already growing in size even with the most minimal Rust function deployments. Unless a dependency provides some significant value, I feel like my function deployments are being punished both in terms of artifact size and CI deployment time (cargo will need to compile tokio, tower, and all their dependencies before my crate).

Contributor (Author):

> I read through this a couple of times and was a bit confused about the motivation for adding a new tower_service dependency and the purpose of this Service impl. [...] Can you add some comments describing the need for the dependency?

Sure. Tower's goal is to be the generalizable network middleware in Rust, so that it doesn't really matter which web framework you're using: if you define things in terms of Tower's Service trait, you can swap middleware and components between frameworks and applications.

> To set up context for where I'm coming from: I'm a bit picky about adding dependencies to the "low level" crate [...] cargo will need to compile tokio, tower, and all their dependencies before my crate.

That's fair. I could take a page from @srijs's book and move this out of the core crate and into an ext crate?
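To make the Service discussion concrete, here is a rough, self-contained sketch (not the PR's actual code) of a handler exposed as a Tower Service, assuming a tower-service 0.2-style Service<Request> trait and futures 0.1; Event, Response, and HandlerError are hypothetical stand-ins for the crate's real types:

use futures::future::{result, FutureResult};
use futures::{Async, Future, Poll};
use tower_service::Service;

// Hypothetical types standing in for the crate's real event/response types.
struct Event;
struct Response;
struct HandlerError;

struct Handler;

impl Handler {
    fn run(&mut self, _req: Event) -> Result<Response, HandlerError> {
        Ok(Response)
    }
}

impl Service<Event> for Handler {
    type Response = Response;
    type Error = HandlerError;
    type Future = FutureResult<Response, HandlerError>;

    // Always ready: a Lambda handler has no backpressure to signal here.
    fn poll_ready(&mut self) -> Poll<(), Self::Error> {
        Ok(Async::Ready(()))
    }

    // This body is the indirection under discussion: it is equivalent to
    // calling futures::future::result(self.run(req)) without the trait.
    fn call(&mut self, req: Event) -> Self::Future {
        result(self.run(req))
    }
}

fn main() {
    let mut svc = Handler;
    // Drive the returned future to completion; FutureResult resolves immediately.
    let _ = svc.call(Event).wait();
}

The call body is exactly what is being questioned above: with the trait removed, the handler could return futures::future::result(self.run(req)) directly.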

Contributor:

I'd vote yes for an ext crate.

I believe the goal of almost every Rust web/network toolkit thus far has been to be the generalized middleware interface used across other frameworks. In practice, they all kind of lived in their own ecosystem bubbles and that didn't actually happen*. It's an idealistic goal, which is wonderful, but based on history it just hasn't successfully moved outside of ecosystem bubbles. Given how many of these kinds of projects have emerged in the past and then gone away as the next one came along, I don't think hardcoding one into the core of this is a great idea.

That said, something that plays nice with an ecosystem's interfaces is ideal for an ext crate; something like a lambda-tower makes sense. That's actually the intent of lambda-http: not to be its own thing, but to facilitate playing nice with the ecosystem of existing http crates. If you're not working within the http crate ecosystem, you can just as easily drop down one level to the raw (de)serializable lambda events without paying an extra toll-booth tax. I feel like the same story could be told for tower, tide, or whatever new ecosystem bubble pops up, enabling easy integration without having a negative effect on the lambda core runtime crate.

Outside of the playing-nice-with-ecosystems angle, I think it's useful to ask what value this adds to the runtime's immediate needs. In this case, it looked like a lot of machinery that was really just an indirection for a function call :) To facilitate a future-based system you can just as easily use the futures crate (futures::future::result(operation)). In this sense I'd err toward only bringing in what you actually need, especially in the context of a core crate.

Mini strawman idea: since Lambda's general contract is one task at a time and the platform brings the concurrency (i.e. spawning more lambdas, not more in-process threads), what's the drawback to somehow exposing the tokio::runtime::Runtime to handlers (I'm not sure what this would look like), letting handlers use it to drive responses through some async interface, and having the Lambda runtime await the result (yielding errors/responses to the Lambda init process) before passing the next message to the handler? Alternatively, let handlers manage concurrency themselves and block on the results before yielding responses back to the handler's caller. In higher-traffic situations it seems like it should be Lambda's job to spawn more lambdas, but I don't know enough about other runtimes to know if they do that or "buffer" events to handlers and make sure they are executed one at a time.

* The special case here is of course the hyper crate, if we're talking about crates that have actually, successfully permeated through ecosystem bubbles as a common denominator, but that's been a very unique case thus far!
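A very rough sketch of the blocking variant of the mini strawman above, assuming tokio 0.1 and futures 0.1; handle_async and the hard-coded events are hypothetical and only illustrate the shape, not a proposed API:

use futures::future::{self, Future};
use tokio::runtime::Runtime;

// Hypothetical async handler: drives its own futures-based work.
fn handle_async(event: String) -> impl Future<Item = String, Error = ()> + Send {
    future::ok(format!("processed: {}", event))
}

fn main() {
    let mut rt = Runtime::new().expect("failed to start tokio runtime");
    // Block on each invocation before fetching the next event, matching
    // Lambda's one-task-at-a-time contract; concurrency comes from the
    // platform spawning more function instances, not more threads here.
    for event in &["event-1", "event-2"] {
        match rt.block_on(handle_async(event.to_string())) {
            Ok(response) => println!("ok: {}", response),
            Err(()) => eprintln!("handler failed"),
        }
    }
}

Whether that blocking is done by the runtime or by the handler itself is exactly the open question in the comment above.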

// hydrate -> process -> terminate
let f = hydrate() // Fetch the event from the runtime...
.and_then(move |event| process(event, handler)) // ...run the user-provided handler...
.then(complete) // ...and then, report on the status/failure of the Lambda invocation.
Contributor:

You may find Future::inspect more appropriate here. Its purpose is pretty aligned with your comment.

Contributor (Author):

I didn't know about inspect()! Would it replace the .and_then call?

Contributor:

It's essentially for logging and debugging purposes, to log the state of a pipeline of operations. I think when you're calling .then(complete) you're doing just that.
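A tiny illustration of that distinction with futures 0.1 (the values and closures below are simplified stand-ins, not the PR's hydrate/process/complete functions): inspect only peeks at a successful value for logging, so the and_then that runs the handler stays, and then() still sees the final Result:

use futures::future::{self, Future};

fn main() {
    let f = future::ok::<u32, ()>(42)                           // stand-in for hydrate()
        .inspect(|event| println!("fetched event: {}", event))  // peek for logging only
        .and_then(|event| future::ok::<u32, ()>(event + 1))     // still needed to run the handler
        .then(|res| {
            // stand-in for complete(): sees the final Result either way
            println!("invocation finished: {:?}", res);
            res
        });

    // Drive the chain to completion on the current thread.
    let _ = f.wait();
}

In other words, inspect complements the existing and_then/then chain rather than replacing it.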

Error: Fail + Display + Debug + Sync + 'static,
Handler: FnMut(Event) -> Result<Response, Error>,
{
(handler)(event)
Contributor:

This is a pretty hefty abstraction to read through for the result of a function invocation, (handler)(event). What value is it adding over just calling handler(event)?

Contributor (Author):

It's mostly due to the trait bounds. To the best of my knowledge, these trait bounds need to be repeated each time a new trait or interface is defined. I wish I could get away from it...

Contributor:

I didn't try this myself, but does .and_then(move |event| handler(event)) not compile?
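For reference, a simplified sketch of the repetition being described, using Display + Debug in place of the crate's Fail bound and made-up Event/Response types; every generic function that forwards the handler has to restate the same where clause:

use std::fmt::{Debug, Display};

// Hypothetical stand-ins for the crate's event and response types.
struct Event;
#[derive(Debug)]
struct Response;

// The bounds exist only so the signatures line up; the body is a plain call.
fn process<Handler, Error>(event: Event, mut handler: Handler) -> Result<Response, Error>
where
    Error: Display + Debug + Sync + 'static,
    Handler: FnMut(Event) -> Result<Response, Error>,
{
    (handler)(event)
}

// ...and the same where clause has to be spelled out again on every caller
// that stays generic over the handler.
fn run<Handler, Error>(handler: Handler) -> Result<Response, Error>
where
    Error: Display + Debug + Sync + 'static,
    Handler: FnMut(Event) -> Result<Response, Error>,
{
    process(Event, handler)
}

fn main() {
    let result: Result<Response, String> = run(|_event| Ok(Response));
    println!("{:?}", result.is_ok());
}

Writing (handler)(event) versus handler(event) makes no difference to the compiler; the parentheses are purely stylistic.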

@@ -0,0 +1,19 @@
[package]
name = "lambda-runtime-server"
Contributor:

If this is specifically for mocking, I'd try to work that into its package name.

@softprops (Contributor):

This can probably be closed with the state of master being clean

@coltonweaver (Contributor):

I agree with @softprops. Unless there's a reason to re-open I'm going to close this.

@nmoutschen deleted the strawman-api branch on February 1, 2022.