Many jobs sometimes #28
@elderbig I'm here, my friend. Let me break it down for you. You are using two special APIs.

So you have a situation here where the task is cancelled as soon as it executes, so you don't see the output. If you don't want to manipulate task instances in real time, just use `add_task`.
Use `add_task`; it runs much faster. Like this:

```rust
let s = "echo 1".into();
delay_timer.add_task(build_task_async_execute_process(i, &s)?)?;
```

Yes, it is nanosecond-level speed. Hope it helps you. :)
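For context, here is a hypothetical sketch of what the two-argument builder called in the snippet above might look like; the thread never shows its body, so this simply mirrors the one-argument version posted later in the issue, taking the shell command as a parameter:

```rust
use delay_timer::prelude::*;

// Hypothetical: the thread does not show this function's actual body.
fn build_task_async_execute_process(task_id: u64, command: &str) -> Result<Task, TaskError> {
    let mut task_builder = TaskBuilder::default();
    // run the given command as a child process each time the task fires
    let body = unblock_process_task_fn(command.to_string());
    task_builder
        .set_frequency_repeated_by_cron_str("* * * * * *".into())
        .set_task_id(task_id)
        .spawn(body)
}
```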
But I meant another problem: when I run 100 tasks, each running once per second, memory rises step by step. After about 2 hours it rises from 8M up to 180M; with 1000 tasks it soon uses 500M of memory. The more tasks and the longer the run, the more memory is used. Something may be wrong.
I understand what you mean, I have two questions.

You could try calling …
Thanks for your response, this is my code:

```rust
// Imports added so the example is self-contained (assuming anyhow's Result
// and the delay_timer prelude, which the original snippet omitted).
use std::thread;
use std::time::Duration;

use anyhow::Result;
use delay_timer::prelude::*;
use log::info;

#[async_std::main]
async fn main() -> Result<()> {
    log4rs::init_file("conf/log4rs.yaml", Default::default()).unwrap();
    info!("begin");
    let delay_timer = DelayTimerBuilder::default().build();
    for i in 0..1000 {
        delay_timer.add_task(build_task_async_execute_process(i)?)?;
        info!("init task id = [{}]", i);
    }
    info!("==== All job is be init! ====");
    // keep the process alive for two hours while the tasks run
    for _ in 0..120 {
        thread::sleep(Duration::from_secs(60));
    }
    Ok(delay_timer.stop_delay_timer()?)
}

fn build_task_async_execute_process(task_id: u64) -> Result<Task, TaskError> {
    let mut task_builder = TaskBuilder::default();
    // spawn `echo hello` as a child process without blocking the runtime
    let body = unblock_process_task_fn("echo hello".into());
    task_builder
        .set_frequency_repeated_by_cron_str("* * * * * *".into())
        .set_task_id(task_id)
        .set_maximum_running_time(10)
        .set_maximum_parallel_runnable_num(1)
        .spawn(body)
}
```

This time I ran 1000 jobs, so I can see more memory usage. It ran for 2 hours and memory rose to 2.7G. What can I do? And what does … mean?
Thanks to your demand, @elderbig, I made an optimization.

Version 0.10.0, Changed: optimized the use of internal memory.

Details: there is a time wheel inside `delay_timer`. The time wheel uses slots (time scales) as units, and each slot corresponds to a hash table. When the wheel rotates to a slot, the tasks that are already ready inside it are executed, and when a task is executed it moves from one slot to another. In order to have enough capacity to store the tasks, a memory allocation may happen here, so that by the time the whole time wheel has been traversed, each slot has accumulated a generous capacity; when there are many tasks, the memory occupied by the whole time wheel becomes very large. So it is necessary to shrink the memory in time. This change shrinks the memory after each round of traversing the slots and executing tasks, to ensure that the slots keep a basic, compact capacity.

You can now use the latest version of `delay_timer` (0.10.0).
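To make the mechanism concrete, here is a minimal sketch of the shrinking idea described above. This is not `delay_timer`'s actual internals; `Slot`, `TaskEntry`, and `tick` are hypothetical names.

```rust
use std::collections::HashMap;

struct TaskEntry; // placeholder for whatever a slot stores per task

struct Slot {
    tasks: HashMap<u64, TaskEntry>, // task id -> entry
}

impl Slot {
    // When the wheel rotates to this slot: drain and handle the ready
    // tasks, then release the excess capacity left over from earlier peaks.
    fn tick(&mut self) {
        for (_task_id, _entry) in self.tasks.drain() {
            // ...execute the task, or move it to its next slot...
        }
        // The 0.10.0 change in spirit: shrink after each round so the slot
        // keeps a compact capacity instead of its historical peak.
        self.tasks.shrink_to_fit();
    }
}

fn main() {
    let mut slot = Slot { tasks: HashMap::new() };
    for id in 0..10_000 {
        slot.tasks.insert(id, TaskEntry);
    }
    let before = slot.tasks.capacity();
    slot.tick();
    println!("capacity before tick: {before}, after: {}", slot.tasks.capacity());
}
```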
I tested this version (0.10.0) and the problem is reduced: in a 2-hour test with 1000 jobs, the memory used by my process rose from 8M to 319M. That seems better, but memory still rises slowly. Can it be better? This is my test project: https://github.com/elderbig/delay_timer-test
I tested once more: in 5.8 hours, memory rose up to 650M. It seems memory always rises while the program is running, but the increase gets slower and slower. As for CPU use on my 12-core computer, `top` shows that dt_test, the test process, costs 1.7% CPU in total with 100 jobs.
Hi friend, I noticed your feedback that the program accumulates more and more buffered memory during operation; it has now received a 75%-80% memory optimization. After the optimization, the remaining buffers should be managed rationally by the OS (physical memory plus swap); as shown on your terminal, there is only a 0.1% physical-memory overhead. I can actually optimize it further if you need, but that may cost about 10% of the internal scheduler's performance; it would still guarantee tens of thousands of task schedules per second.
Thanks for your reply. In my opinion this crate is in fact good enough, but might there be some memory leak? I am a beginner in Rust and not able to judge this question myself yet.
My friend, I understand your concern; you are meticulous about engineering. The crate has not crashed or leaked in our production environment, and the actual consumption is very low: the memory metrics have not exceeded 0.3% in almost 10 months of deployment. Also, I personally have an AWS server with 1 core and 1 GB of RAM, which has been running for 3 months (never restarted) without any abnormalities.

As for "memory leaks", it may feel that way looking through a console tool (in reality the console metric may be the sum of historical memory requests, not an accurate value), but the actual physical memory has always stayed around 50M. If you have any concerns or hit any exceptions, you can always open issues and I will see that they are addressed and dealt with in a timely manner. Here is my local run: 1000 tasks running for 4 hours, with actual memory consumption cycling between 50M and 80M.
I monitor the process with this script:

```sh
# please run this monitoring script in the tester dir
pid=`ps -ef | grep dt_test | grep -v grep | awk '{print $2}'`
# clean the last test result
> mem.log
while true; do
    t=`date "+%Y-%m-%d %H:%M:%S"` && m=`ps o rss -p $pid | tail -1`
    echo "$t | $m"
    sleep 5
done >> mem.log
```

Over 5.8 hours of running, memory usage behaved as described above. My test project is at https://github.com/elderbig/delay_timer-test
Thank you for the monitoring script, it looks very neat. May I ask, did you compile the program in release mode? If it is convenient, please test with a release build.
Yes, the test results I provided were run in release mode.
I have a suggestion for memory optimization.

For example, you could change the 10 in the code (`set_maximum_running_time(10)`). This is because the … You can try what I suggested first; it will pay off a lot.
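For reference, a hedged sketch of where that 10 lives, assuming it refers to the `set_maximum_running_time(10)` call in the code earlier in this thread; the replacement value of 30 is purely illustrative, since the maintainer's exact recommendation was lost from the quote:

```rust
use delay_timer::prelude::*;

fn build_task_async_execute_process(task_id: u64) -> Result<Task, TaskError> {
    let mut task_builder = TaskBuilder::default();
    let body = unblock_process_task_fn("echo hello".into());
    task_builder
        .set_frequency_repeated_by_cron_str("* * * * * *".into())
        .set_task_id(task_id)
        // the "10" being discussed: the task's maximum running time;
        // 30 here is an illustrative value, not the thread's recommendation
        .set_maximum_running_time(30)
        .set_maximum_parallel_runnable_num(1)
        .spawn(body)
}
```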
Thanks, I will try to write my application with this crate. Meanwhile I will not use too many jobs and will cut down the frequency; I think it will work well, and I will test again as soon as it is updated. 1000 jobs is an ideal target, but every second is a harsh condition; if that is achieved, it will be a great piece of work.
Thank you, I'll keep an eye on your experience and plan changes and iterations of the project!
Describe the bug
I configured 100 jobs in a configuration file, conf/config.toml, and initialized them in a loop. Sometimes a few jobs cannot be initialized; the output looks like: … Then it does nothing for a long time, and the remaining jobs never appear. I tried tokio, and there is the same problem.