
mem widget reports wrong(?) amount of memory used and it's frozen #271

Closed
sprnza opened this issue Jan 22, 2017 · 9 comments

Comments

@sprnza commented Jan 22, 2017

Hi guys!
I've found that lain.widgets.mem reports a wrong amount of used memory (it differs from htop and free -h) and the value is never refreshed. I use this code to configure the widget:

my_mem = wibox.container.margin()
my_mem:setup {
    id = "mmr",
    widget = lain.widgets.mem(),
    widget:set_align("center"),
    widget:set_text(mem_now.used),
}

And it reports roughly 200 MB more than what I see in htop or free.

awesome v4.0-97-g2c3aebc1 (Harder, Better, Faster, Stronger)
 • Compiled against Lua 5.3.3 (running with Lua 5.3)
 • D-Bus support: ✔
 • execinfo support: ✔
 • RandR 1.5 support: ✔
 • LGI version: 0.9.1

Lua 5.3.3  Copyright (C) 1994-2016 Lua.org, PUC-Rio
@sprnza (Author) commented Jan 22, 2017

~150 MB higher, to be precise.

@lcpz (Owner) commented Jan 22, 2017

First, your widget is wrong: it sets mem_now.used statically and never updates it. Here is a correct version:

my_mem = wibox.container.margin(
    wibox.widget {
        align  = "center",
        valign = "center",
        widget = lain.widgets.mem {
            settings = function()
                widget:set_text(mem_now.used)
            end
        },
        -- margins
        --left = 1,
        --right = 1,
        --top = 1,
        --bottom = 1
    })
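
For reference, a minimal sketch of where my_mem could then be placed, assuming the default awesome 4.x rc.lua widget names (s.mywibox, s.mytaglist, mytextclock and friends):

-- Sketch only: add my_mem to the right-hand widget group in rc.lua;
-- the surrounding names are the awesome 4.x defaults, adjust to taste.
s.mywibox:setup {
    layout = wibox.layout.align.horizontal,
    { layout = wibox.layout.fixed.horizontal, s.mytaglist, s.mypromptbox },
    s.mytasklist,
    { layout = wibox.layout.fixed.horizontal, my_mem, mytextclock, s.mylayoutbox },
}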

Second, don't rely on free, because it computes usage in its own particular way. Let's take the *top values as the reference. Until now, the widget didn't take page tables into account: try the following patch and check the difference against htop.

--- a/mem.lua
+++ b/mem.lua
@@ -29,17 +29,18 @@
         mem_now = {}
         for line in io.lines("/proc/meminfo") do
             for k, v in string.gmatch(line, "([%a]+):[%s]+([%d]+).+") do
                if     k == "MemTotal"  then mem_now.total = math.floor(v / 1024)
                elseif k == "MemFree"   then mem_now.free  = math.floor(v / 1024)
                elseif k == "Buffers"   then mem_now.buf   = math.floor(v / 1024)
                elseif k == "Cached"    then mem_now.cache = math.floor(v / 1024)
                elseif k == "SwapTotal" then mem_now.swap  = math.floor(v / 1024)
                elseif k == "SwapFree"  then mem_now.swapf = math.floor(v / 1024)
+               elseif k == "PageTables" then mem_now.paget = math.floor(v / 1024)
                end
             end
         end
 
-        mem_now.used = mem_now.total - (mem_now.free + mem_now.buf + mem_now.cache)
+        mem_now.used = mem_now.total - mem_now.free - mem_now.buf - mem_now.cache - mem_now.paget
         mem_now.swapused = mem_now.swap - mem_now.swapf
         mem_now.perc = math.floor(mem_now.used / mem_now.total * 100)
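
As a quick sanity check of the patched formula, with purely hypothetical values (all in MiB):

-- hypothetical /proc/meminfo figures, already converted to MiB
local total, free, buf, cache, paget = 7870, 3200, 210, 2900, 15
local used = total - free - buf - cache - paget
print(used)  -- 1545, i.e. 15 MiB less than without the PageTables term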

@sprnza (Author) commented Jan 22, 2017

Hey! Thanks for the code! It was a stupid mistake on my part.
Regarding the patch: after applying it, I see that PageTables is around 15 MB (or MiB? it's not clear from the wiki), so the result is still sitting ~130 M(i)B higher than htop.

@sprnza (Author) commented Jan 22, 2017

Cached + Buffers from /proc/meminfo != buff/cache from top, for some reason.

@sprnza (Author) commented Jan 22, 2017

I used SReclaimable instead of PageTables. Also, top rounds up rather than down (rounding down is what mem.lua implements).
From top's sources (is it a round up?):

/* useful macros */
#define bytetok(x)  (((x) + 512) >> 10)

With these changes to mem.lua, the widget shows the same numbers as top, and they are in MiB. htop uses MB, so they differ.
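
A rough sketch (not the exact commit) of the computation described here, with SReclaimable subtracted and top's bytetok-style rounding applied once at the end:

-- Sketch only: assumes the raw KiB values from /proc/meminfo are kept
-- unrounded and converted to MiB in a single step.
local function bytetok(kib)
    return math.floor((kib + 512) / 1024)  -- Lua version of top's ((x) + 512) >> 10
end

-- hypothetical KiB values: MemTotal, MemFree, Buffers, Cached, SReclaimable
local total, free, buf, cache, srec = 8057340, 3276800, 215040, 2969600, 153600
local used_kib = total - free - buf - cache - srec
print(bytetok(used_kib) .. "M used")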

lcpz closed this as completed in 23318f8 on Jan 23, 2017
@lcpz (Owner) commented Jan 23, 2017

We're now aligned with top and free -h. Thanks for your investigation.

@psychon commented Jan 24, 2017

From top's sources (is it roundup?)
#define bytetok(x) (((x) + 512) >> 10)

Right shift by 10 can be understood as integer division by 2^10. So the above does math.floor((x + 512) / 1024) (the floor is implicit, since C truncates on division and x is likely not negative). Put differently: it rounds to nearest.
Perhaps it is more easily readable written as math.floor(x / 1024 + 0.5) (this would not work in C, since x is an integer and the division would already truncate). I think this version is easier to understand.
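
For example, a throwaway comparison of the rounding variants, using two hypothetical KiB values:

for _, kib in ipairs({ 1300, 1600 }) do
    print(kib,
        math.floor(kib / 1024),          -- round down (old mem.lua): 1, 1
        math.floor((kib + 512) / 1024),  -- top's bytetok:            1, 2
        math.floor(kib / 1024 + 0.5),    -- same, written with + 0.5: 1, 2
        math.ceil(kib / 1024))           -- round up:                 2, 2
end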

Oh and: If the goal is to match top's behaviour, then 23318f8 is still wrong, since it rounds up instead of to-nearest.

/me was just randomly browsing around and leaves again...

@psychon commented Jan 24, 2017

Or even one more alternative: awful.util.round(x / 1024) (this function is implemented as just floor(x + 0.5), but it hopefully helps readability).

lcpz pushed a commit that referenced this issue Jan 24, 2017
shmilee added a commit to shmilee/awesome-away that referenced this issue May 22, 2024
Cached includes tmpfs & shmem.

ref:
1. neofetch get_memory()
2. kernel Documentation/filesystems/proc.rst
3. lcpz/lain#271
@shmilee commented May 22, 2024

Since Cached includes tmpfs & shmem (see proc.rst), after subtracting mem_now.cache we may need to add Shmem back.

Cached
    In-memory cache for files read from the disk (the pagecache)
    as well as tmpfs & shmem. Doesn't include SwapCached.

Just like https://github.com/dylanaraps/neofetch/blob/ccd5d9f52609bbdcd5d8fa78c4fdb0f12954125f/neofetch#L2676

After adding Shmem back, used is almost the same as in htop.
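
A minimal standalone sketch (not the exact lain or neofetch code) of the adjusted accounting, assuming values in KiB as read from /proc/meminfo:

-- Sketch only: Cached already counts tmpfs & shmem, so Shmem is added back
-- after subtracting Cached (and SReclaimable, per the discussion above).
local mem = {}
for line in io.lines("/proc/meminfo") do
    local k, v = line:match("^(%w+):%s+(%d+)")
    if k then mem[k] = tonumber(v) end
end

local used_kib = mem.MemTotal - mem.MemFree - mem.Buffers - mem.Cached
                 - (mem.SReclaimable or 0) + (mem.Shmem or 0)
print(math.floor(used_kib / 1024 + 0.5) .. "M used")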
