🐛 Bug Report: Techdocs Memory Leak #27347
Comments
Hi @pcgqueiroz, can you share your full Backstage details? As of a few releases ago we ship the Postgres Search Engine when creating a new Backstage instance to help avoid this, and we also recommend using Postgres as your database for production. I would highly suggest using this. 👍
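For reference, here is a minimal sketch of what wiring in the Postgres search engine looks like with the new backend system (package names as in recent Backstage releases; the actual backend wiring may differ):

```ts
// packages/backend/src/index.ts (sketch, not the reporter's actual file)
import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

// Search plugin plus the Postgres-backed search engine module,
// which replaces the default in-memory Lunr engine.
backend.add(import('@backstage/plugin-search-backend'));
backend.add(import('@backstage/plugin-search-backend-module-pg'));

backend.start();
```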
Hi @awanlin, I am already using the Postgres Search Engine. Also, the search engine backend and its modules run in a different pod (which is not leaking memory), since my Backstage production deployment uses a microservices architecture. The pod that is leaking memory runs only the `techdocs-backend` plugin. Here is my configuration:
OS: Linux 6.8.0-47-generic - linux/x64
Dependencies:
Awesome, that at least eliminates that as a potential problem 👍
Hey there! I happened to be investigating something similar and came upon this thread. @pcgqueiroz, I believe the problem you're experiencing may be specific to the AWS publisher implementation, related to this line in the TechDocs awsS3 publisher. Other implementations of the docsRouter pipe the stream returned from the object store service (e.g. here in the GCS implementation), but the AWS implementation seems to load the entire file contents into memory before sending it on to clients. I unfortunately don't have the bandwidth (nor a nice/easy test environment) to fix this, but my assumption is that removing the `streamToBuffer` call and piping the stream to the response instead would resolve it.
Hey Sydney,
I don't have TechDocs cache set up. Please tell me if you have any other questions so you can replicate the issue.
> On Tue, Nov 12, 2024, Sydney Achinger wrote: Hi, do you have TechDocs Cache set up? I'm trying to replicate this issue.
Hey @iamEAP, I have tried to make the changes you suggested:

```diff
- res.send(await streamToBuffer(resp.Body as Readable));
+ const fileStream = resp.Body as Readable;
+ fileStream.pipe(res);
```

The code worked but it did not fix the memory leak. Any other ideas that I could test? Thanks
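For completeness, a slightly more defensive variant of that change (a sketch only, with hypothetical names such as `storageClient`, the bucket name, and the key derivation, not the actual Backstage publisher code) would use `pipeline` from `stream/promises`, so that an errored or aborted response destroys the source stream instead of leaving its buffers around:

```ts
import { Readable } from 'stream';
import { pipeline } from 'stream/promises';
import express from 'express';
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3';

const storageClient = new S3Client({});
const app = express();

app.get('/static/docs/*', async (req, res, next) => {
  try {
    const resp = await storageClient.send(
      // Hypothetical bucket/key; the real publisher derives these from the request.
      new GetObjectCommand({ Bucket: 'techdocs', Key: req.path }),
    );
    // pipeline() streams the object straight to the response and destroys both
    // streams on error/abort, so no whole-file buffer is ever held in memory.
    await pipeline(resp.Body as Readable, res);
  } catch (error) {
    next(error);
  }
});
```

If memory still climbs with plain piping in place, the retained buffers are probably being created somewhere other than this handler.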
Memory Profiling Snapshot (1 Week)
The following memory profiling snapshot from DataDog indicates a potential memory leak. Currently, Backstage is using an older version of the dependency involved.
Hi @pcgqueiroz and @Ferin79! @Ferin79, thanks for pointing out the need to upgrade.
I think this could be related to the
Hi @squid-ney, thanks for looking at this issue. Answering your question: when I replaced the buffer with the pipe, the code did work but it didn't solve the memory leak problem. Sorry that I was not clear enough in my comment above. In my tests, I have the same feeling that the issue is related to the
Do you happen to have the permission framework enabled? There's one more code path in TechDocs that will cause `streamToBuffer` to be called.
Hey @iamEAP, I do not know exactly what you mean by having the permission framework enabled, but I do have the
Yes, this path wouldn't respect that change. This check could happen frequently (because it would be called not just for the HTML content of a page, but also for all images/assets loaded on that page), so the result gets cached for a brief moment. All of that to say: don't rule that code path out.
As far as I could trace in my setup, unfortunately I do not think this is related :-(
@pcgqueiroz The package… Also the…
Hey @pcgqueiroz, a few questions about your TechDocs set-up.
Hi @Ferin79, my environment is set up as microservices, and I have one dedicated pod to run the `techdocs-backend` plugin.
Hi @squid-ney, answering your first question: using an "aws s3-type local storage" means that I am using an S3-compatible object store running locally. I am not setting a custom
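For context, the publisher side of that set-up in app-config.yaml looks roughly like the sketch below (the bucket name, endpoint, and credential variables are hypothetical placeholders, not the reporter's actual values):

```yaml
techdocs:
  builder: 'local'            # docs are built by the backend itself
  generator:
    runIn: 'local'            # run the generator locally rather than in Docker
  publisher:
    type: 'awsS3'
    awsS3:
      bucketName: 'techdocs'                     # hypothetical bucket name
      endpoint: 'http://s3.internal.local:9000'  # hypothetical S3-compatible endpoint
      s3ForcePathStyle: true
      credentials:
        accessKeyId: ${TECHDOCS_S3_ACCESS_KEY}
        secretAccessKey: ${TECHDOCS_S3_SECRET_KEY}
```

With `type: 'awsS3'`, every `/static/docs` request is served through the awsS3 publisher discussed above, which is why that implementation is the focus here.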
📜 Description

The `techdocs-backend` plugin is experiencing a memory leak when integrated with the `search-backend-module-techdocs`. In a production setup with a dedicated pod for `techdocs-backend`, the `/static/docs` endpoint is queried by the `search-backend-module-techdocs` every 10 minutes. With over 3000 entities to process, memory consumption increases progressively until the JavaScript heap is exhausted, resulting in a crash.

👍 Expected behavior
The `techdocs-backend` plugin should handle repeated `/static/docs` queries and large volumes of entities without excessive memory consumption or memory leaks, allowing stable operation in production.

👎 Actual Behavior with Screenshots
The memory usage of the `techdocs-backend` plugin gradually increases each time it processes requests from the `search-backend-module-techdocs`, eventually causing the JavaScript heap to overflow and crash.

👟 Reproduction steps
1. Run `techdocs-backend` in a microservices setup with a dedicated pod.
2. Query `/static/docs` with `search-backend-module-techdocs`.
3. Let `techdocs-backend` process a large set of entities (over 3000, with some missing documentation in S3).

📃 Provide the context for the Bug.
No response
🖥️ Your Environment
Backstage version: 1.32.3
Node.js version: v18.18.0
Setup: Microservices with a dedicated pod for `techdocs-backend`
TechDocs Configuration: Local builder and generator, publishing files to a local AWS S3-type bucket
👀 Have you spent some time to check if this bug has been raised before?
🏢 Have you read the Code of Conduct?
Are you willing to submit PR?
No, but I'm happy to collaborate on a PR with someone else