We are using Service Fabric actors for data processing in an IoT scenario, and the memory usage of the service is much higher than expected. We have deployed it on the cluster using the SharedProcess hosting model. Every actor service has 3 replicas.
We analyzed a process from one of the nodes: it used 1.6 GB of RAM, but when we inspected a memory dump, we could only account for about 30 MB in use. The process hosts 275 actor services with 847 actors in total, and each actor holds under 1 KB of state. We continuously delete unused actors with DeleteActorAsync, yet the memory used by the process never decreases.
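For context, the cleanup is done roughly like this (a minimal sketch assuming the standard ActorServiceProxy pattern; the actual service URIs and calling code differ):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Client;

public static class ActorCleanup
{
    // Deletes an actor's persisted state and removes it from the actor service.
    // The proxy targets the actor service itself, so this also works for actors
    // that are not currently active in memory.
    public static async Task DeleteUnusedActorAsync(
        Uri actorServiceUri, ActorId actorId, CancellationToken cancellationToken)
    {
        IActorService actorServiceProxy = ActorServiceProxy.Create(actorServiceUri, actorId);
        await actorServiceProxy.DeleteActorAsync(actorId, cancellationToken);
    }
}
```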
We see this behavior with other actor projects as well.
Why is the service consuming so much RAM? Is there a way to free up the unused memory?