Not Releasing Memory #48
Comments
Interesting. I will have to take a look at that this weekend or next week and see what is going on. |
Thanks @proxb |
Just out of curiosity, what version of PowerShell are you running when you get the memory leaks? |
Hi @proxb I believe I was on 4.0 at the time (was either 4.0 or 5.0). |
Weird, I run this at work (Windows 7 w/ PowerShell V4) and it tops out at ~370MB and eventually drops down to 111MB (starting was 96MB). |
Same here, memory leaks. |
I'm trying to determine if this is a PoshRSJob issue or a PowerShell issue. I can duplicate this without the module by running this:

```powershell
$PowerShell = [powershell]::Create()
$RunspacePool = [runspacefactory]::CreateRunspacePool()
$RunspacePool.Open()
$PowerShell.RunspacePool = $RunspacePool
[void]$PowerShell.AddScript({
    [appdomain]::GetCurrentThreadId()
    $i = 0
    while ($i -lt 1000) {
        Set-Variable -Name "var$($i)" -Value (Get-Service)
        $i++
    }
})
# Begins allocating all memory
$Handle = $PowerShell.BeginInvoke()
while (-NOT $Handle.IsCompleted) { Start-Sleep -Milliseconds 100 }
$PowerShell.EndInvoke($Handle)
$PowerShell.RunspacePool.Dispose()
$PowerShell.Dispose()
Remove-Variable PowerShell, RunspacePool
[gc]::Collect()
[gc]::WaitForPendingFinalizers()
[gc]::Collect()
```

The memory jumps up immediately after running BeginInvoke() and sees a small release after disposing the RunspacePool, but it is still well above the starting memory. |
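The before/after numbers above are easy to check by hand. A small helper like the following (an illustration, not part of the original repro) reads the working set of the current PowerShell process:

```powershell
# Hypothetical helper: report the working set of this PowerShell process in MB.
function Get-WorkingSetMB {
    [math]::Round((Get-Process -Id $PID).WorkingSet64 / 1MB, 1)
}

"Baseline: $(Get-WorkingSetMB) MB"            # before BeginInvoke()
# ... run the repro above ...
"After dispose + GC: $(Get-WorkingSetMB) MB"  # after the final [gc]::Collect()
```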
Boe, I have scripts that use either normal PowerShell Jobs or PoshRSJobs with Ryan |
I agree, @ryan-leap. I noticed the same with PowerShell jobs, which is why I gave PoshRSJob a try. |
Thanks @ryan-leap and @MattHodge. |
Thanks for the update @proxb - sounds like an interesting problem! |
I am seeing the same issue. I noticed ISE was becoming slow and unresponsive. I checked Task Manager and noticed I had a background task consuming over 4 GB of memory. |
I recorded a video running the same test script. If you are interested in the video, drop me a line and I can upload it to YouTube.

Test script:

```powershell
1..10 | Start-RSJob {
    Start-Sleep -Milliseconds (Get-Random -Minimum 100 -Maximum 10000)
}
Get-RSJob | Wait-RSJob -ShowProgress
```
|
@EsOsO If you can, go ahead and upload it and throw the link in here. That is interesting with your testing on V2 and not something I looked at (mostly focused on V4/5 testing). |
Here's the link. |
Thanks! I'll check it out. |
Edit: Removed - realized my comment was unrelated. Filing a separate issue. |
I have the same issue with my own runspace module. The only time I can really get it to release the memory is doing it within the ScriptBlock, but that slows down my script tremendously. Alternatively, if I run 1..3 | foreach { [System.GC]::Collect() } after the script is finished, it clears up immediately. There is no place I can find to run that once within the script such that it will clear the memory. I have found that $Host.Runspace.ThreadOptions = "ReuseThread" helps, but not enough. All of my testing is on 4/5. |
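A minimal sketch of the repeated-collection workaround described above; the WaitForPendingFinalizers call between passes is an addition here, not part of the original comment:

```powershell
# Run several collection passes after the script finishes. Waiting for pending
# finalizers between passes (an added assumption) lets finalizable objects
# release their resources before the next pass.
1..3 | ForEach-Object {
    [System.GC]::Collect()
    [System.GC]::WaitForPendingFinalizers()
}
```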
Yeah, this is a weird issue that sometimes goes away on its own during garbage collection and other times drops maybe half of the memory it has used. I wonder if the 1..3 is forcing objects to go from gen0 to gen1 to gen2 before being cleared out. I've had a memory profiler on this and the results have been interesting, but I haven't had the time to dive much deeper into this lately. Definitely need to come back to this and see what I can figure out. I think I am going to watch it using the 1..3 approach that you showed to see what happens. I have some wild ideas, such as tracking the thread and then killing the thread itself to see how well that plays, but I've read that this is a dangerous practice if you are not sure of everything the thread is doing. I found that when working with a RunspacePool, ThreadOptions is set to Default, which is actually ReuseThread, while a single runspace created using [runspacefactory]::CreateRunspace() will also have its ThreadOptions set to Default, which in that case means UseNewThread. |
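The defaults described above can be inspected directly; a quick sketch (the observed values may vary by PowerShell version):

```powershell
# Compare the default ThreadOptions of a runspace pool and a standalone runspace.
$pool = [runspacefactory]::CreateRunspacePool()
"Pool ThreadOptions: $($pool.ThreadOptions)"          # Default -> threads are reused

$runspace = [runspacefactory]::CreateRunspace()
"Runspace ThreadOptions: $($runspace.ThreadOptions)"  # Default -> new thread per invocation

$pool.Dispose()
$runspace.Dispose()
```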
I had to discontinue using PoshRSJob and revert back to Invoke-Parallel. I had issues with the background jobs continually hanging instead of continuing, and memory usage grew to over 17GB. I repeated this several times. I couldn't identify what was causing the problem, because the processes running inside the runspace ran inside the Powershell_ISE context and didn't have separate threads I could identify activity on. Invoke-Parallel worked fine for me, with the automatic variable import, module/function import, etc. It did error on Write-Verbose, but I removed this and it's running successfully without issue. I'll definitely revisit this soon, as this is under active work, but for now Invoke-Parallel is stable for me while PoshRSJob is not working. |
@sheldonhull What version of PoshRSJob were you using, and is it possible to see the code that you were using? Just looking to duplicate your issue. |
1.7.2.9. What additional information can I provide to help with reproducing? |
@sheldonhull It would be great to see the code that you are using, if possible, to help reproduce this issue, as well as to see it run with Invoke-Parallel. |
I did some more testing with this: with a starting memory usage of ~100MB, I ran the initial code and watched it climb to ~2GB before finishing at ~4GB. I waited until the RunspacePool was disposed of and then ran |
I don't know when it happened, but sometime in the last 24 hours (I left my console session open), my memory dropped down to its original levels at console startup. I'm going to do some more testing with this and see exactly when this happens. |
What's a simple script that shows the memory problems? I would try to run it and then maybe have a look with WinDbg. If there is still a lot of memory in use, there has to be some GC root that holds all the data and prevents it from being collected. |
@stej I just used the one that @MattHodge provided, and it does a great job of holding some 1.5-2GB of memory on my machine prior to the 2-minute garbage collection:

```powershell
Import-Module PoshRSJob
1..100 | Start-RSJob -Name {$_} -ScriptBlock {
    $i = 0
    while ($i -lt 1000)
    {
        Set-Variable -Name "var$($i)" -Value (Get-Service)
        $i++
    }
}
# Wait for things to finish
Get-RSJob | Wait-RSJob
# Throw away the jobs
Get-RSJob | Remove-RSJob
```

I was using Red Gate's software (http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/) to look at where things are at, but my limited knowledge in this area may not be pointing to an accurate location of what is going on. I do see a lot of PSCustomObject as well as [ServiceController] data being held. I haven't actually pushed out the latest commit that includes the garbage collection to free up the space after a few minutes within the RunspacePool cleanup, but I was planning on doing so some time this week. I'll be interested in seeing your results if you get time to look at it using WinDbg. |
I added some garbage collection to the routine that cleans up the jobs every 2 minutes; that should help with the memory issues that have been reported. |
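For context, a rough sketch of what a timer-driven cleanup with forced collection can look like; this is an illustration under assumptions, not PoshRSJob's actual implementation:

```powershell
# Illustrative only -- not PoshRSJob's code. A timer fires every 2 minutes,
# removes completed jobs (assumed cleanup step), then asks the CLR to reclaim memory.
$timer = New-Object System.Timers.Timer
$timer.Interval = 120000   # 2 minutes, in milliseconds
$timer.AutoReset = $true

Register-ObjectEvent -InputObject $timer -EventName Elapsed -Action {
    Get-RSJob -State Completed | Remove-RSJob
    [System.GC]::Collect()
    [System.GC]::WaitForPendingFinalizers()
} | Out-Null

$timer.Start()
```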
I saw similar symptoms using runspace pools and submitted the issue to PowerShell. |
Ohhh, your mention that it's the pools! 💯 @FriedrichWeinmann check it out |
Yep, I recently redesigned my script Build-Parallel so that it uses plain runspaces without pools, and the problem with leaks was solved. By the way, at least in my scenario, I do not see any significant performance impact, if any at all. But some more coding was needed, of course. |
I think that the impact on performance can only be evaluated with the rapid creation/removal of hundreds of tasks. Throttling support, however, will have to be done manually. |
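A rough sketch of the pool-free approach described above, under assumptions (this is not Build-Parallel's actual code); each task gets its own runspace that is disposed as soon as its output is collected:

```powershell
# Run each task in its own runspace instead of a shared runspace pool.
$task = { param($n) Start-Sleep -Milliseconds 200; "task $n done" }

$jobs = foreach ($n in 1..5) {
    $ps = [powershell]::Create()
    $ps.Runspace = [runspacefactory]::CreateRunspace()
    $ps.Runspace.Open()
    [void]$ps.AddScript($task).AddArgument($n)
    [pscustomobject]@{ PowerShell = $ps; Handle = $ps.BeginInvoke() }
}

foreach ($job in $jobs) {
    $job.PowerShell.EndInvoke($job.Handle)
    # Disposing the runspace itself (there is no pool to dispose) is what
    # releases the memory in this approach.
    $job.PowerShell.Runspace.Dispose()
    $job.PowerShell.Dispose()
}
```

As noted above, throttling has to be done by hand here, e.g. by capping how many handles are in flight before opening new runspaces.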
Hi Boe,
Give the following a try in a new PowerShell window and use Task Manager to keep an eye on the memory usage:
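The test script referred to here is presumably the one @proxb quotes earlier in the thread; it is reproduced below for completeness:

```powershell
Import-Module PoshRSJob
1..100 | Start-RSJob -Name {$_} -ScriptBlock {
    $i = 0
    while ($i -lt 1000)
    {
        Set-Variable -Name "var$($i)" -Value (Get-Service)
        $i++
    }
}
# Wait for things to finish
Get-RSJob | Wait-RSJob
# Throw away the jobs
Get-RSJob | Remove-RSJob
```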
You will notice over 1 GB of memory usage even though the jobs have been thrown away.
If you try to clear the memory using the [System.GC]::Collect() command, the usage drops to around 500 MB, but then does not go any lower. Any ideas how to free the memory up without restarting the PowerShell process?
Thanks :)