Memory leak detected for publish message method. #1069

Closed

ibmappcon opened this issue Jul 24, 2020 · 15 comments

ibmappcon commented Jul 24, 2020

Hi,

We encountered a memory leak with the pubSubClient.publish method. We load tested the publish message operation and found the following results:

For 100 requests, an increase of 0.1–0.2 MB.
For 1000 requests, an increase of 0.6–0.7 MB.
For 5000 requests, an increase of around 3 MB.

Memory usage continues to grow as the number of requests increases.
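
For reference, heap growth like this can be measured with a harness along these lines; publishOnce is a hypothetical stand-in for a single publish call, and node needs --expose-gc for the manual GC calls:

// Measurement sketch only: publishOnce is a hypothetical helper that
// performs one publish and resolves when it completes.
async function measureGrowth(n, publishOnce) {
    if (global.gc) global.gc(); // settle the heap first (requires --expose-gc)
    const before = process.memoryUsage().heapUsed;
    for (let i = 0; i < n; i++) {
        await publishOnce();
    }
    if (global.gc) global.gc();
    const after = process.memoryUsage().heapUsed;
    console.log(`${n} requests: +${((after - before) / 1048576).toFixed(2)} MB`);
}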

Node version - 10.16.3

@product-auto-label product-auto-label bot added the api: pubsub Issues related to the googleapis/nodejs-pubsub API. label Jul 24, 2020
@yoshi-automation yoshi-automation added the triage me I really want to be triaged. label Jul 24, 2020
kamalaboulhosn (Contributor) commented:

@ibmappcon Can you please provide the code you are using that demonstrates the memory leak?

ibmappcon (Author) commented Jul 26, 2020

@kamalaboulhosn
A brief background on what we are trying to achieve: we are building a multi-tenant Node.js application where users can come in with their service account credentials and use our app to interact with Google Pub/Sub for integration.

The code to publish a message looks something like this:

const { PubSub } = require('@google-cloud/pubsub')

function calledOnEachIncomingRequest() {
    // A new client is created (and closed) for every incoming request.
    return new Promise((resolve, reject) => {
        const pubSubClient = new PubSub({
            credentials: {
                client_email: 'xxxxxserviceaccountemail',
                private_key: '{Redacted}'
            },
            projectId: 'projectId'
        })

        const topicInstance = pubSubClient.topic('loadTest2')
        const dataBuffer = Buffer.from('any1234')
        const attributes = {}
        topicInstance.publish(dataBuffer, attributes, (error, messageId) => {
            console.log('...................', error, messageId)
            if (error) {
                console.log(error)
                reject(error)
            } else {
                topicInstance.flush(() => {
                    // was: newclient.close().then(resolve(messageId)), which
                    // referenced an undefined variable and resolved too early
                    pubSubClient.close().then(() => resolve(messageId))
                })
            }
        })
    })
}

calledOnEachIncomingRequest()

We also tried caching and keeping one client per user instead of creating it at runtime; the problem with that approach is that we see this error on the publish operation after a few concurrent requests:

[2432]: c:\ws\src\node_http2.cc:893: Assertion `(flags_ & SESSION_STATE_READING_STOPPED) != (0)' failed.
 1: 00007FF6ABD5DD8A v8::internal::GCIdleTimeHandler::GCIdleTimeHandler+4506
 2: 00007FF6ABD38886 node::MakeCallback+4534
 3: 00007FF6ABD3893F node::MakeCallback+4719
 4: 00007FF6ABCDCA35 EVP_CIPHER_CTX_get_cipher_data+114517
 5: 00007FF6ABCE0BD1 EVP_CIPHER_CTX_get_cipher_data+131313
 6: 00007FF6ABC26D76 DH_get0_q+14022
 7: 00007FF6ABC24AD8 DH_get0_q+5160
 8: 00007FF6ABC27D61 DH_get0_q+18097
 9: 00007FF6ABC810B1 RSA_meth_get_flags+753
10: 00007FF6ABD8E203 uv_tcp_getpeername+1315
11: 00007FF6ABDA1457 uv_dlerror+2007
12: 00007FF6ABDA23E8 uv_run+232
13: 00007FF6ABD3FE7E node::NewContext+1390
14: 00007FF6ABD4048B node::NewIsolate+603
15: 00007FF6ABD408E7 node::Start+823
16: 00007FF6ABBEF3CC node::MultiIsolatePlatform::MultiIsolatePlatform+604
17: 00007FF6AC83863C v8::internal::compiler::OperationTyper::ToBoolean+129516
18: 00007FFB163F7BD4 BaseThreadInitThunk+20
19: 00007FFB1654CE51 RtlUserThreadStart+33

In both approaches the leak is evident, because when we stub the .publish call with a dummy async function, memory is reclaimed properly.

Problem statement:

  1. How can we use a single cached client for multiple concurrent calls without introducing a memory leak?
  2. If 1 is not possible, how can we create a client at runtime and close it without leaking memory?

Please note we have a specific requirement where we want to publish only one message at a time.
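
For illustration, the cached-per-tenant idea in question 1 might look like this; this is a sketch, and getTenantClient plus the Map cache are names made up here, not library API:

const { PubSub } = require('@google-cloud/pubsub')

const tenantClients = new Map()

// Return (or lazily create) the one client for this tenant's credentials.
function getTenantClient(tenantId, credentials, projectId) {
    if (!tenantClients.has(tenantId)) {
        tenantClients.set(tenantId, new PubSub({ credentials, projectId }))
    }
    return tenantClients.get(tenantId)
}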

Thanks!

ibmappcon (Author) commented:

@kamalaboulhosn Any updates on this would really help.

@yoshi-automation yoshi-automation added the 🚨 This issue needs some love. label Jul 29, 2020

feywind commented Jul 30, 2020

I'll put this on my agenda for today.

@feywind feywind added priority: p2 Moderately-important priority. Fix may not be included in next release. type: bug Error or flaw in code with unintended results or allowing sub-optimal usage patterns. and removed 🚨 This issue needs some love. triage me I really want to be triaged. labels Jul 30, 2020

feywind commented Jul 31, 2020

Hi @ibmappcon,

I dug into this a bit today. I am indeed able to see the memory leak issue, and it appears to have something to do with the code we are generating for protos. I'd like to look at that a bit more, because it seems like we shouldn't be leaking things like that.

But my take on it (and after consulting with my colleagues, they agree) is that it's not great to create a PubSub object per request. There is a lot of data that is loaded from disk when that happens, and it's also going to cause problems with the authentication service eventually, because that downloaded info is cached across PubSub object creations.

Tomorrow I'm going to see if I can reproduce the multiple cached clients issue. Eventually that might still end up with the same issue, though, if you have thousands of users, or if you end up with thousands of Pub/Sub server sockets open.
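
To make the recommended shape concrete, here is a minimal sketch (not a definitive fix, and reusing the topic name from the snippet above): one client created at module load and shared by every request.

const { PubSub } = require('@google-cloud/pubsub');

// One client per process; concurrent requests share it and its Topic handles.
const pubSubClient = new PubSub();
const topic = pubSubClient.topic('loadTest2');

function handlePublishRequest(payload) {
    // Resolves with the server-assigned message id.
    return topic.publish(Buffer.from(payload));
}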

stephenplusplus (Contributor) commented:

@feywind where does this one stand? Do you think our code somewhere is retaining objects in memory unnecessarily, or is this a matter of best practices?

PatentLobster commented:

I'm having the same issue:

process.env.GRPC_VERBOSITY = 'DEBUG';
const functions = require('firebase-functions');
const {PubSub} = require('@google-cloud/pubsub');

// runtimeOpts is defined elsewhere, e.g. { memory: '256MB', timeoutSeconds: 60 }
exports.EventPostPubSub = functions.runWith(runtimeOpts).https
    .onRequest(async (request, response) => {
        try {
            // A new client is constructed on every invocation.
            const pubSubClient = new PubSub({
                // Sending messages to the same region ensures they are received in order
                // even when multiple publishers are used.
                apiEndpoint: 'us-central1-pubsub.googleapis.com:443',
            });

            const data = JSON.stringify(request.body);
            const dataBuffer = Buffer.from(data);

            const messageId = await pubSubClient.topic('event').publish(dataBuffer);
            response.send('ok ' + messageId);
        } catch (e) {
            console.error(e);
            response.send('error');
        }
    });

We're seeing very high memory usage. For reference, the google-gax entry from our package-lock.json:

          "version": "2.9.2",
          "resolved": "https://registry.npmjs.org/google-gax/-/google-gax-2.9.2.tgz",
          "integrity": "sha512-Pve4osEzNKpBZqFXMfGKBbKCtgnHpUe5IQMh5Ou+Xtg8nLcba94L3gF0xgM5phMdGRRqJn0SMjcuEVmOYu7EBg==",
          "requires": {
            "@grpc/grpc-js": "~1.1.1",
            "@grpc/proto-loader": "^0.5.1",
            "@types/long": "^4.0.0",
            "abort-controller": "^3.0.0",
            "duplexify": "^4.0.0",
            "google-auth-library": "^6.1.3",
            "is-stream-ended": "^0.1.4",
            "node-fetch": "^2.6.1",
            "protobufjs": "^6.9.0",
            "retry-request": "^4.0.0"
          },

PatentLobster commented:

I can confirm that moving the new PubSub client declaration outside of the function fixed it :)
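
For completeness, the fixed shape is roughly the following sketch, with the client and topic hoisted to module scope (runtimeOpts defined elsewhere, as in the original snippet):

const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');

// Created once per instance and reused across invocations.
const pubSubClient = new PubSub({
    apiEndpoint: 'us-central1-pubsub.googleapis.com:443',
});
const topic = pubSubClient.topic('event');

exports.EventPostPubSub = functions.runWith(runtimeOpts).https
    .onRequest(async (request, response) => {
        try {
            const messageId = await topic.publish(Buffer.from(JSON.stringify(request.body)));
            response.send('ok ' + messageId);
        } catch (e) {
            console.error(e);
            response.send('error');
        }
    });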

@yoshi-automation yoshi-automation added the 🚨 This issue needs some love. label Jan 20, 2021

feywind commented Feb 12, 2021

I haven't been able to dig into this more since the last post, but I still believe something is leaking from the generated gax API calls. At least when I looked at it in the profiler, most of the growing memory usage was strings containing generated proto code. If keeping around the same PubSub fixes it, then I'm happy to let it go for now, but I'd also be interested to hear what @alexander-fenster thinks. (Feel free to re-open as needed.)

@feywind feywind closed this as completed Feb 12, 2021

lox commented Apr 3, 2024

Came across this issue after trying to track down a memory leak in our Firebase Functions, which leaked while doing little else besides publishing events. Moving the creation of the pubsub client outside the function, instead of creating a new one each time, fixed our problem too.

Is this potentially worth re-opening @feywind? Surely it's reproducible pretty trivially?

vibhor1997a commented:

+1 @lox. I'm able to recreate it with this minimal example:

const dotenv = require('dotenv');
const { randomSync } = require('ksuid');
dotenv.config();

const { PubSub } = require('@google-cloud/pubsub');

// One client created at module scope for topic/subscription setup...
const c1 = new PubSub({
    projectId: process.env.PUBSUB_PROJECT_ID,
    emulatorMode: true,
});

async function main() {
    const topicId = randomSync().string.slice(1, 10);
    console.log('topic is', topicId);
    await c1.createTopic(topicId);
    const sub = await c1.createSubscription(topicId, 'sub-' + topicId);
    sub[0].on('message', (msg) => {
        console.log(msg.data.toString());
        msg.ack();
    });

    // ...but a brand-new client is constructed for every publish,
    // which is what leaks.
    for (let i = 0; i < 10000; i++) {
        const client = new PubSub({
            projectId: process.env.PUBSUB_PROJECT_ID,
            emulatorMode: true,
        });
        const topic = client.topic(topicId);
        await topic.publishMessage({
            json: {
                val: i
            }
        });
        console.log(i);
    }
}

main()
    .then(() => {
        console.log('done');
    }).catch(err => {
        console.log(err);
    });

Using a global PubSub client created once, outside the loop, definitely solves this.
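
In the snippet above, that amounts to reusing c1 inside main() instead of constructing a client per iteration, e.g.:

// Reuse the module-level c1 client and a single Topic handle.
const topic = c1.topic(topicId);
for (let i = 0; i < 10000; i++) {
    await topic.publishMessage({ json: { val: i } });
}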

feywind commented Nov 5, 2024

Thanks for adding the code. Yeah, the intended usage is to have the new PubSub somewhere global, one per application, ideally. There are a lot of things wrapped up in there (and in client.topic) that need to be cached for it to work efficiently. I've been modifying samples lately to try to emphasize that intended usage more.

vibhor1997a commented:

@feywind understood. Would you have any thoughts on adding this note to the docs?

feywind commented Nov 8, 2024

The particular issue is mostly Node-specific, because it's the only language (as far as I know) that has this kind of "big object" approach to the API. So I'm not sure it would make sense to put it into the shared docs. Other languages use terminology like "subscriber" and "publisher" for the objects, which make it a bit more obvious that there is machinery in there. We want to make this change for Node, too, but it's kind of gotten deprioritized.

In the meantime, here is the sort of change I've been making to samples:

// Creates a client; cache this for further use

Just to kind of pull the instantiation out and put an explicit comment.
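
A sample updated in that style might look something like this (a sketch, not the exact sample text):

const { PubSub } = require('@google-cloud/pubsub');

// Creates a client; cache this for further use
const pubSubClient = new PubSub();

async function publishMessage(topicNameOrId, data) {
    const messageId = await pubSubClient
        .topic(topicNameOrId)
        .publishMessage({ data: Buffer.from(data) });
    console.log(`Message ${messageId} published.`);
}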

vibhor1997a commented Nov 9, 2024

Hey @feywind,

Maybe we can add the same comment in the README?

My opinion is that developers using this package generally go through the README example first, and only go through the samples if they need to look into something specific that isn't straightforward enough.

Let me know if that makes sense.

feywind pushed a commit to feywind/nodejs-pubsub that referenced this issue Nov 12, 2024