Cawbird consumes a lot of memory over time #142
Possible candidate cause (but I'm just starting to learn C/memory analysis tools):
I noticed this as well. Does Cawbird delete posts (and, more importantly, their media attachments) from memory over time and during scrolling?
Massif suggested media downloading is the problem, so log some details about what it's doing and when we free memory
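A minimal sketch of the kind of logging this describes, assuming GLib's `g_debug()` and a hypothetical `Media` stand-in type (the real Cawbird types differ); enable output with `G_MESSAGES_DEBUG=all`:

```c
#include <glib.h>

/* Hypothetical media record -- stands in for Cawbird's real type */
typedef struct {
  char *url;
  int   width;
  int   height;
} Media;

static void
media_downloaded_cb (Media *media)
{
  g_debug ("Downloaded %s (%dx%d)", media->url, media->width, media->height);
}

static void
media_free (Media *media)
{
  g_debug ("Freeing media for %s", media->url);
  g_free (media->url);
  g_free (media);
}

int main (void)
{
  Media *m = g_new0 (Media, 1);
  m->url = g_strdup ("https://example.invalid/pic.jpg");
  m->width = 680;
  m->height = 680;
  media_downloaded_cb (m);
  media_free (m);
  return 0;
}
```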
It should delete them, and from a quick bit of debugging I can see that it is disposing of media when it removes tweets (so scrolling back would use lots of memory as it loads tweets and images, but get to the top and it'll purge the old ones and free the memory). The media for the thumbnails does appear to be coming out at up to 1200×1200, though, so scroll back a few hours past a load of pictures and those thumbnails could quickly consume quite a bit of memory. This probably wouldn't be so much of a problem if we weren't doing stuff in C! I'm just starting on some debugging code to understand what it's doing.
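As a rough back-of-the-envelope check (a sketch, assuming a decoded pixbuf costs 4 bytes per pixel for RGBA):

```c
#include <stdio.h>

int main (void)
{
  /* Approximate in-memory cost of a decoded 1200x1200 thumbnail at 4 B/px */
  long per_thumbnail = 1200L * 1200 * 4;     /* 5,760,000 B ≈ 5.5 MiB   */
  long fifty_thumbs  = 50 * per_thumbnail;   /* 288,000,000 B ≈ 275 MiB */

  printf ("one thumbnail: %ld B, fifty thumbnails: %ld B\n",
          per_thumbnail, fifty_thumbs);
  return 0;
}
```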
Trying to work out whether we are actually using a lot of memory, or whether VSZ is just a misleading number and the OS would reclaim a load if it needed to. Progression of
Given that we're potentially (in my case) loading dozens of 1200×1200px images and holding them in memory, that adds up. For reference,
So I'm nowhere near needing the OS to collect any freed memory.
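For anyone wanting to watch both numbers from inside the process rather than via `ps`, a small sketch that reads VmSize (VSZ) and VmRSS from /proc/self/status on Linux:

```c
#include <stdio.h>
#include <string.h>

/* Print the current process's VSZ and RSS as reported by the kernel */
static void
print_memory_usage (void)
{
  FILE *f = fopen ("/proc/self/status", "r");
  char line[256];

  if (f == NULL)
    return;

  while (fgets (line, sizeof line, f) != NULL)
    {
      if (strncmp (line, "VmSize:", 7) == 0 || strncmp (line, "VmRSS:", 6) == 0)
        fputs (line, stdout);
    }

  fclose (f);
}

int main (void)
{
  print_memory_usage ();
  return 0;
}
```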
Yeah, I think we might be freeing memory, and we just need to do something more sensible with images. Steps:
So it has pruned ~160MB of RSS, but VSZ has remained the same. Which suggests it was freeing the media correctly, but VSZ doesn't instantly come down?
Interestingly, I just came across this article, which has a different command to the basic process listing that I'd been using above:
Comparing the two, I get a virtual size much closer to the resident size. Although this other article suggests that VSZ is everything that has been allocated, not just looked at. Which leaves me wondering if there's something we should be doing to make it get un-allocated. I hate low-level languages and memory management!
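One glibc-specific detail that might explain the "freed but the numbers stay high" behaviour: `free()` usually returns memory to the allocator's free lists, not to the kernel, so VSZ/RSS don't drop immediately. `malloc_trim(0)` explicitly asks glibc to hand unused heap pages back to the OS. A minimal sketch (this only applies when glibc's malloc is the allocator):

```c
#include <malloc.h>   /* glibc-specific; declares malloc_trim() */
#include <stdlib.h>

#define N 100000

int main (void)
{
  void *blocks[N];

  /* Lots of small allocations land on the main heap (not mmap) */
  for (int i = 0; i < N; i++)
    blocks[i] = malloc (1024);
  for (int i = 0; i < N; i++)
    free (blocks[i]);

  /* free() alone may leave these pages cached in glibc's free lists;
   * malloc_trim(0) asks glibc to return unused heap pages to the kernel.
   * It returns 1 if memory was actually released, 0 otherwise. */
  malloc_trim (0);

  return 0;
}
```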
Just ran it, and I think this is to do with thumbnails and unnecessary scaling rather than any kind of memory leak. So, the solution is simple: stop showing images! Or I could do something better with the thumbnails and caching and see if that helps. A quick hack that I did the other night affected the displayed image size as well.
Does it help? It shoots up to over 500MB in just 3 scrolls, and I'm hiding images in my timeline. Scrolling and loading new tweets is the trigger. Will running Valgrind help?
I've had a quick look at the code and it looks like it downloads images and stores them as objects in memory regardless of whether you have images visible in the timeline or not. That's good for responsiveness and consistency (if you turn them on then you won't be waiting for them to be downloaded) but not for memory usage. The suggestion of stopping showing images was a non-serious way of fixing memory usage by just not showing images at all. What metric are you using for ">500MB"? There are a couple of options in the previous comments that very rarely get to that level for me, even after long runs. And my first comment after the initial report is valgrind output.
I ran a memory profiler and plotter; here's a graph. I made 3 full scrolls, opened 2 tweets (one with a picture, one without), and photos were hidden in the timeline. If I leave it like this it goes to around 570-600MB in the task manager, which is more than my browser with two tabs open. Is this normal? I missed that output.
But which version of "memory usage" is that using? RSS (resident set size - current actual memory usage) or VSZ (virtual size - effectively the potential memory usage, including stuff it's not using)? All of which is made to look worse by the fact that those numbers may include the size of all of the shared libraries (GTK, etc), which then get double-counted across the system. At least that's what I understand from a lot of reading around. Memory usage isn't a simple "this app is using this amount of my memory" thing.

As for what's "normal": whatever it does now is normal for the way it is written. Valgrind isn't showing any memory leaks, so we're using exactly what GTK/libsoup/various other libraries need to use. So the only way to get better (lower) is to improve what we hold in memory.
On the "shared libraries" front, Firefox links the following:
In contrast, Cawbird links:
That's a lot more shared libraries, so we'll always be showing as using a lot of memory in some ways. But which of those do we specifically depend on?
From there, we have:
But 90% of those are probably used by at least one other app. So while they show up in the RSS and VSZ values, and we'd be using that much memory on our own if we were the only app running, the reality isn't anywhere near that simple. So, something like Firefox might look like it's not using as much RAM, but a good chunk of that is probably because it's not pulling in quite so many libraries that pull in other libraries that pull in…. But those are shared libraries, so it's not just our memory. (But I think we still have other issues around image handling that could be done better)
It actually reports RSS size, but it is optimised for Python-based applications and uses a different backend depending on the OS. They don't have a lot of documentation on the project page regarding this. I tried to use Valgrind to report memory usage, but according to online sources it slows the process down, so Cawbird started lagging and thus any memory usage it reports will not be accurate.
Yeah, I was reading that (https://utcc.utoronto.ca/~cks/space/blog/linux/LinuxMemoryStats) - isn't there a way to isolate the actual memory used by the process itself from that used by its shared libraries? And RSS/VSZ overcount memory, since they can double-count shared library code and other shared things that multiple processes are using.
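There sort of is: Linux's PSS (proportional set size) divides each shared page's cost between the processes mapping it, which avoids the double-counting. On kernels 4.14+ it's summarised in /proc/&lt;pid&gt;/smaps_rollup. A small sketch reading our own process's value:

```c
#include <stdio.h>
#include <string.h>

/* Print this process's PSS, which splits shared pages proportionally
 * between all processes that map them (needs Linux 4.14+). */
int main (void)
{
  FILE *f = fopen ("/proc/self/smaps_rollup", "r");
  char line[256];

  if (f == NULL)
    {
      perror ("smaps_rollup");
      return 1;
    }

  while (fgets (line, sizeof line, f) != NULL)
    {
      if (strncmp (line, "Pss:", 4) == 0)
        fputs (line, stdout);
    }

  fclose (f);
  return 0;
}
```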
We were loading "medium" (1000px). Now loading "small" (680px), which should still be enough for most people. This cuts about 1/3 off my memory usage with my current timeline - 317648kiB to 215864kiB measured by `/usr/bin/time -f "RSS: %MkiB" cawbird` (different timelines with different images will get different values) This also currently reduces the display size when clicked. That will be fixed separately.
We currently assume "large" is the actual size. Thumbnail images are scaled to full size as a starter. We will then download the real one and replace it later (like HTML's "lowsrc" attribute)
If the image is already right (the image isn't bigger than the "small" size) then don't waste time creating a scaled clone surface.
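Something like the following check, as a sketch with GdkPixbuf (the `SMALL_WIDTH`/`SMALL_HEIGHT` constants are illustrative stand-ins, not Cawbird's actual names):

```c
#include <gdk-pixbuf/gdk-pixbuf.h>

#define SMALL_WIDTH  680
#define SMALL_HEIGHT 680

/* Only create a scaled copy when the source is actually bigger than the
 * "small" size; otherwise use the pixbuf we already have. */
static gboolean
needs_scaling (GdkPixbuf *pixbuf)
{
  return gdk_pixbuf_get_width (pixbuf)  > SMALL_WIDTH ||
         gdk_pixbuf_get_height (pixbuf) > SMALL_HEIGHT;
}
```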
Scaling doesn't work when you've got the wrong dimensions!
Corebird always loaded the large size and shrank it, which used lots of memory. We now load "small" for the thumbnail and only load the full-res image when the user requests it. This includes some cool "lowsrc" trickery where we first show the user the thumbnail scaled up while loading the high-res version in the background. On a fast Internet connection, this is barely distinguishable from just showing the full-res image.
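A rough sketch of the "lowsrc" idea with GTK3/GdkPixbuf (the function names here are hypothetical stand-ins, not Cawbird's real API):

```c
#include <gtk/gtk.h>

/* Show the small thumbnail scaled up to the target size immediately... */
static void
show_placeholder (GtkImage *image, GdkPixbuf *thumb, int width, int height)
{
  GdkPixbuf *scaled = gdk_pixbuf_scale_simple (thumb, width, height,
                                               GDK_INTERP_BILINEAR);
  gtk_image_set_from_pixbuf (image, scaled);
  g_object_unref (scaled);
}

/* ...then swap in the real image once its async download finishes */
static void
on_fullres_ready (GtkImage *image, GdkPixbuf *fullres)
{
  gtk_image_set_from_pixbuf (image, fullres);
}
```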
Currently has a slight drawback, but only for a minority of users.
This will stop us wasting memory & bandwidth on:
* Images in notifications/favs we never look at
* Images when the media is hidden
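In effect, a guard like this before any download starts (a sketch; the types and names are hypothetical stand-ins for Cawbird's real settings and tweet objects):

```c
#include <glib.h>

/* Minimal stand-ins for Cawbird's real settings/tweet types */
typedef struct { gboolean show_media; }    Settings;
typedef struct { gboolean media_visible; } Tweet;

/* Only start the download when the image will actually be shown;
 * otherwise defer until the user explicitly asks for it. */
static gboolean
should_download_media (const Settings *settings, const Tweet *tweet)
{
  return settings->show_media && tweet->media_visible;
}
```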
While we could load "orig", it breaks our "show the thumbnail scaled and then replace it" approach if the sizes differ. Orig is still available through "open in browser".
Rather than waiting until we display each one, start async-loading all images when the media dialog gets created, so that they're ready when the user clicks "Next"
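As a sketch, assuming a hypothetical `start_async_load()` helper and stand-in types (Cawbird's real loader API will differ):

```c
#include <glib.h>

typedef struct { char *url; }          Media;        /* stand-in type */
typedef struct { GPtrArray *media; }   MediaDialog;  /* stand-in type */

/* Hypothetical async starter; in the real code this would queue an
 * HTTP request (e.g. via libsoup) and invoke a callback when done. */
static void
start_async_load (Media *media, gpointer user_data)
{
  (void) media;
  (void) user_data;
}

/* When the media dialog is created, kick off async loads for every
 * attachment up front so clicking "Next" never waits on the network. */
static void
media_dialog_init (MediaDialog *dialog)
{
  for (guint i = 0; i < dialog->media->len; i++)
    start_async_load (g_ptr_array_index (dialog->media, i), dialog);
}
```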
Latest numbers after the updates to how we load images, based on running Cawbird with `/usr/bin/time -f "RSS: %MkiB" cawbird` (as above):
These numbers are just representative figures. They'll vary depending on your latest timeline and how many images there are.
This prevents warnings in cases where we're already checking the values; the warnings presumably stem from race conditions.
The loader shouldn't decide whether special cases are met. Also, a helper function means that you don't have to understand the logic of loading the hi-res texture to get the right surface.
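Roughly the shape of such a helper (a sketch; the `MediaSurface` type and field names are illustrative, not Cawbird's actual code):

```c
#include <cairo.h>

/* Stand-in for the real media object holding both surfaces */
typedef struct {
  cairo_surface_t *thumb_surface;
  cairo_surface_t *hires_surface;   /* NULL until the download completes */
} MediaSurface;

/* Callers just ask for "the right surface"; the decision about whether
 * the hi-res texture has finished loading lives in one place. */
static cairo_surface_t *
get_display_surface (MediaSurface *media)
{
  if (media->hires_surface != NULL)
    return media->hires_surface;

  return media->thumb_surface;   /* fall back to the scaled thumbnail */
}
```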
Okay, that should be a good bit of optimisation. We'll still slowly eat memory if people have "scroll on new tweets" disabled and don't look at it for hours on end, but there's not much we can do about that without getting far more complex (and less responsive for many people, and prone to issues) with "load as it comes into view" behaviour.
**Describe the bug**
It's not noticeable (at least not on my machine) but if you leave Cawbird running then it'll slowly consume more and more memory.
This isn't just to do with new tweets stacking up when it isn't auto-scrolling, because scrolling to the top and waiting for it to clear the older tweets doesn't reduce the memory usage by much.
**To Reproduce**
Steps to reproduce the behavior:
`ps aux | grep "[c]awbird"`
(column 4 is the percentage of system RAM, column 5 is the absolute amount in kiB)

**Expected behavior**
At step 3, Cawbird is using some extra memory compared to startup, but not much.
At step 5, it's back down to startup levels.
**Actual behaviour**
At step 3:
```
ibboard 4665 0.6 4.1 1612168 1015320 ? Sl 10:36 1:33 /usr/bin/cawbird --gapplication-service
```
At step 5:
```
ibboard 4665 0.7 3.9 1505916 965580 ? Sl 10:36 2:02 /usr/bin/cawbird --gapplication-service
```
(Values after ~5hrs running)
**System details:**
**Additional context**
If anyone knows about understanding memory usage of an app (beyond running a basic Valgrind and seeing if it reports memory leaks) then please help!