After execution completes, RDS connections stay open #798

Closed · Ryanauger95 opened this issue Aug 31, 2019 · 8 comments

@Ryanauger95 commented Aug 31, 2019

I am running an RDS MySQL server on a t2.micro, which allows 65 concurrent connections. When I run my Lambda functions with invoke local, or actually deploy them, the connections close immediately after execution. With serverless-offline, they stay open: each function invocation opens a new connection, and these connections are never closed.

Here is my RDS connection monitor: the large ramp-ups are my serverless-offline sessions, and the sharp drops occur each time I stop serverless-offline with Ctrl-C. The relevant insight from this graph is that the number of connections never goes down until serverless-offline exits.

[Screenshot: RDS connection-count graph, Aug 31, 2019]

@dnalborczyk
Collaborator

hey @piedpieper

which version of serverless-offline are you using? and with which flags? with which language? where are you opening your SQL connection? do you have some sample handler code focusing on the SQL connection?

@dnalborczyk
Collaborator

generally speaking, this plugin is not responsible for your code; it essentially just runs it. that being said, there might be issues with certain patterns around db connections and pooling. for example, I remember seeing issues where require.cache reloading on file change caused problems like this.

like I said, it would be good to get to the root of this once and for all. if you could provide some sample handler code, or even just pseudocode, that would help; I just need to know where and how you are opening your db connections.
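For illustration, here is a minimal sketch of the suspected mechanism (an assumption for demonstration purposes, not serverless-offline's actual implementation): deleting a handler module's require.cache entry between requests forces its module scope to re-run, so a pool opened in module scope is recreated on every request while the old one is never closed.

// pool.js: stands in for a handler module that opens a pool in module scope
console.log("module scope evaluated: opening a new connection pool");
module.exports = { query: async () => "result" };

// simulate.js: each "request" clears the module cache first, which is
// roughly what per-request cache invalidation amounts to
for (let i = 0; i < 3; i += 1) {
  delete require.cache[require.resolve("./pool.js")];
  const handler = require("./pool.js"); // module scope runs again: new "pool"
  handler.query();
}
// logs "module scope evaluated: opening a new connection pool" three times,
// one new pool per request, and nothing ever closes the previous one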

@Ryanauger95
Author

Version 5.10.1, no flags, Node.js. I'm opening the SQL connection using knex:

const Knex = require("knex");

// Envvar is the author's environment-variable helper (not shown in the issue)
const knex = Knex({
  client: "mysql",
  connection: {
    host: Envvar.string("AWS_MYSQL_HOST"),
    user: Envvar.string("AWS_MYSQL_USERNAME"),
    password: Envvar.string("AWS_MYSQL_PASSWORD"),
    database: Envvar.string("AWS_MYSQL_DBNAME")
  },
  pool: { min: 1, max: 1 } // a single-connection pool
});

module.exports = knex; // imported by the handler

@dnalborczyk
Collaborator

dnalborczyk commented Aug 31, 2019

thank you! one more question: are you running the above inside the handler function, or outside, in module scope?

one more thing: does the pool deplete just from hitting the endpoint (without changing the handler file), or only when you change the handler file, hit the endpoint, change the file again, and so on?

@Ryanauger95
Author

I'm running it outside, in module scope. I import the connection into my handler.

It depletes just from hitting the endpoint, without changing the handler file.
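For context, the pattern described above reads roughly like this, assuming the knex instance from the earlier snippet lives in a db.js module (the file name and the "users" table are assumptions for illustration):

// handler.js: imports the shared, module-scope knex instance
const knex = require("./db"); // evaluated once per module load

module.exports.handler = async () => {
  const rows = await knex("users").select("id"); // hypothetical table
  return { statusCode: 200, body: JSON.stringify(rows) };
};

With module caching intact, every invocation reuses the same pool; with the cache invalidated per request, this same file opens a fresh pool each time.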

@dnalborczyk
Collaborator

thanks @piedpieper, I'll have a look and try to reproduce. in the meantime, could you try the latest v6 alpha to see if it fixes your issue?

@dnalborczyk
Collaborator

dnalborczyk commented Sep 1, 2019

@piedpieper

I found the culprit, although I couldn't fully reproduce it: when hammering an endpoint, a LOT of connections were established, but (in my case) they were also eventually released.

I also used pg with Postgres as opposed to knex with MySQL. it seems that in v5.10 the cache invalidation mechanism causes this: every request essentially destroys the handler module cache and reloads it, which destroys the pool and forces it to be re-established on each request.

for now, use --skipCacheInvalidation (set it as a CLI flag or in your serverless.yml). when I used this flag, I saw only 1 open connection.
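For reference, a sketch of both forms; the serverless.yml layout follows the plugin's custom-block convention, so treat the exact key spelling as an assumption for your version:

# as a CLI flag:
serverless offline --skipCacheInvalidation

# or in serverless.yml (assumed v5 custom-block layout):
custom:
  serverless-offline:
    skipCacheInvalidation: true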

now, when I tried the same in the v6 alpha, I also saw only 1 open connection. the reason is that we now keep the handler running, similar to what Lambda does (and I believe the cache invalidator is currently not being used either, although I have to double-check, as v6 is still a work in progress).

my plan is to remove --skipCacheInvalidation in v6 and make it the default behavior, as cache invalidation caused endless problems for people unaware of it, myself included when I started using this plugin.

could you let me know whether --skipCacheInvalidation works for you? or better, try the v6 alpha, which should emulate AWS more closely than what's currently in v5.

dnalborczyk added a commit that referenced this issue Sep 2, 2019
…alidationRegex options, fixes #766, fixes #798, and plenty others
@Ryanauger95
Author

@dnalborczyk --skipCacheInvalidation works for me!
