Offline Messages Queues by User #1369

Closed
jonathanve opened this issue Jun 28, 2017 · 20 comments

Comments

@jonathanve
Contributor

MongooseIM version: 2
Installed from: source

Hi @michalslaski and @fenek,

Is there a friendly way to find out how many offline messages are stored in each user's queue when using the mod_offline module? We are trying to define a good max_user_offline_messages value and to learn which clients are most often offline :)

Thanks for your help!

@fenek
Member

fenek commented Jun 28, 2017

Hi @jonathanve

It is possible, but there is no convenient API to extract this information. Offline queues are stored in a Mnesia table under a single key per user, and in the case of SQL storage as one message per row. If you need a quick solution to check it once, I could prepare a snippet for you to execute in the MongooseIM console. If you'd like to check it periodically... well, that involves developing a new admin function. :)
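
As an illustration of the Mnesia case, a minimal sketch of how one user's queue could be read from the Erlang shell; the user and domain below are hypothetical, and record details may differ between MongooseIM versions:

%% The whole queue for a user sits under a single {User, Server} key;
%% dirty_read/2 returns one offline_msg record per stored message.
Key = {<<"alice">>, <<"example.com">>},   %% hypothetical user
length(mnesia:dirty_read(offline_msg, Key)).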

@jonathanve
Contributor Author

Hi @fenek

The snippet would be OK. Thanks a million!

@fenek
Member

fenek commented Jun 29, 2017

Important question: do you use the Mnesia or ODBC backend for mod_offline?

@jonathanve
Contributor Author

Mnesia @fenek

@fenek
Member

fenek commented Jun 29, 2017

rp(lists:reverse(lists:keysort(2,
    lists:map(fun(Key) ->
                      case mnesia:dirty_read(offline_msg, Key) of
                          List when is_list(List) -> {Key, length(List)};
                          Err -> {Key, Err}
                      end
              end,
              mnesia:dirty_all_keys(offline_msg))))).

You may execute it in the Erlang shell. Depending on how much spare processing power and memory your server has, it should be safe for up to tens of thousands of users. Above that threshold I can't promise anything, since it is a very simple iteration over the whole table. :)
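
Since the goal is tuning max_user_offline_messages, a small variation that only reports users above a cut-off may also be handy; this is just a sketch over the same offline_msg table, with a made-up threshold, and the same whole-table caveat applies:

Threshold = 100,  %% hypothetical cut-off
Counts = [{Key, length(mnesia:dirty_read(offline_msg, Key))}
          || Key <- mnesia:dirty_all_keys(offline_msg)],
rp(lists:filter(fun({_Key, N}) -> N >= Threshold end, Counts)).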

@jonathanve
Contributor Author

Hi @fenek, thanks!

I ran mongooseimctl debug to start the Erlang shell. When running the command I got:

** exception exit: {aborted,{no_exists,offline_msg}}
     in function  mnesia:abort/1 (mnesia.erl, line 310)

@fenek
Member

fenek commented Jun 29, 2017

Can you please double-check in ejabberd.cfg that your MongooseIM instance really uses the Mnesia backend for mod_offline? This table can be missing in only two cases: mod_offline is disabled, or mod_offline stores its data in an SQL DB.

@jonathanve
Contributor Author

jonathanve commented Jun 30, 2017

It is using mnesia.

{mod_offline, [{access_max_user_messages, max_user_offline_messages}]}

I am using MySQL just for MUC Light groups ;)
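
For what it's worth, mod_offline also accepts an explicit backend option; a minimal sketch of how the line above could spell it out (mnesia should be the default, so the behaviour would be the same):

{mod_offline, [{backend, mnesia},
               {access_max_user_messages, max_user_offline_messages}]}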

@fenek
Member

fenek commented Jun 30, 2017

Can you please paste the output of mnesia:info(). (executed in the console)?

@jonathanve
Contributor Author

Sure:

$ mongooseimctl debug

mnesia:start().
mnesia:info().
---> Processes holding locks <--- 
---> Processes waiting for locks <--- 
---> Participant transactions <--- 
---> Coordinator transactions <---
---> Uncertain transactions <--- 
---> Active tables <--- 
schema         : with 1        records occupying 414      words of mem
===> System info in version "4.12.5", debug level = none <===
opt_disc. Directory "/opt/mongooseim/rel/mongooseim/Mnesia.debug--mongooseim@mongooseim" is NOT used.
use fallback at restart = false
running db nodes   = ['debug--mongooseim@mongooseim']
stopped db nodes   = [] 
master node tables = []
remote             = []
ram_copies         = [schema]
disc_copies        = []
disc_only_copies   = []
[{'debug--mongooseim@mongooseim',ram_copies}] = [schema]
2 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok

@fenek
Member

fenek commented Jun 30, 2017

Are you using MongooseIM in a Docker container?

@jonathanve
Contributor Author

Yes, @fenek

@fenek
Member

fenek commented Jun 30, 2017

It seems you are the second user we know of who has issues with the debug console in Docker. We definitely need to investigate this. I'll get back to you as soon as we learn something new.

@fenek
Member

fenek commented Jul 5, 2017

Hi @jonathanve

I've finally had time to check our Docker image and the debug console seems to work fine. How do you start it? I've used docker exec -i -t mongooseim-1 /usr/lib/mongooseim/bin/mongooseimctl debug.

@jonathanve
Contributor Author

jonathanve commented Jul 5, 2017

Hi @fenek

I am executing:

sudo docker exec -i -t mongooseim /opt/mongooseim/rel/mongooseim/bin/mongooseimctl debug

When I run the snippet you provided, I still get:

** exception exit: {aborted,{no_exists,offline_msg}}
     in function  mnesia:abort/1 (mnesia.erl, line 310)

Thanks for your help!

@fenek
Member

fenek commented Jul 6, 2017

Which image tag did you use to create your container? Is it the mongooseim/mongooseim image or something you've built on your own?

@jonathanve
Contributor Author

It is an image we built on our own. Our reference was https://github.com/esl/mongooseim-docker; we used it to build an image that takes the open-source code as the basis, plus some custom features we needed in Erlang for MUC Light groups.

@fenek
Member

fenek commented Jul 10, 2017

OK, it looks like your custom image doesn't attach the debug process to the remote shell. Is mongooseim@mongooseim your actual MongooseIM node name inside the container?

Let's gather some debug output. Can you please enter the container shell with sudo docker exec -i -t mongooseim /bin/bash first, then run bash -x /opt/mongooseim/rel/mongooseim/bin/mongooseim debug and paste the output? You may skip the lines above "Press 'Enter' to continue".
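
As a quick sanity check (assuming mongooseim@mongooseim really is the node name), these few shell calls show whether the debug shell is attached to the running node; if node() reports a debug--... name and the ping returns pang, the shell is running its own empty Mnesia, which would explain the missing offline_msg table:

node().                                %% expected: 'mongooseim@mongooseim'
net_adm:ping('mongooseim@mongooseim'). %% expected: pong
mnesia:system_info(tables).            %% offline_msg should be on the list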

@fenek
Member

fenek commented Jul 10, 2017

My colleague @michalwski suggested it may be caused by the container not being started with a TTY enabled (the -t option in Docker). Can you please check whether this option is set in your deployment scripts?

@jonathanve
Contributor Author

jonathanve commented Jul 10, 2017

@fenek indeed the -t option is not there, only -d. In our test environment, after adding the -d -t options, the XMPP container seems to work fine, and your snippet finally works inside the debug console.

Thanks @fenek for your help!

@fenek fenek closed this as completed Nov 22, 2017