diff --git a/doc/developer-guide/api/functions/TSContScheduleEveryOnEntirePool.en.rst b/doc/developer-guide/api/functions/TSContScheduleEveryOnEntirePool.en.rst
new file mode 100644
index 00000000000..9b7e076b3d5
--- /dev/null
+++ b/doc/developer-guide/api/functions/TSContScheduleEveryOnEntirePool.en.rst
@@ -0,0 +1,59 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed
+   with this work for additional information regarding copyright
+   ownership.  The ASF licenses this file to you under the Apache
+   License, Version 2.0 (the "License"); you may not use this file
+   except in compliance with the License.  You may obtain a copy of
+   the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+   implied.  See the License for the specific language governing
+   permissions and limitations under the License.
+
+.. include:: ../../../common.defs
+
+.. default-domain:: c
+
+TSContScheduleEveryOnEntirePool
+*******************************
+
+Synopsis
+========
+
+.. code-block:: cpp
+
+    #include <ts/ts.h>
+
+.. function:: std::vector<TSAction> TSContScheduleEveryOnEntirePool(TSCont contp, TSHRTime every, TSThreadPool tp)
+
+Description
+===========
+
+Schedules :arg:`contp` to run every :arg:`every` milliseconds on all threads that
+belong to :arg:`tp`. The :arg:`every` interval is an approximation: the period will be at
+least :arg:`every` milliseconds, but possibly more. Resolutions finer than roughly 5
+milliseconds will not be effective. Note that :arg:`contp` is required to NOT have a mutex,
+since the continuation is scheduled on multiple threads. This means the continuation must
+handle synchronization itself.
+
+The return value can be used to cancel the scheduled events via :func:`TSActionCancel`.
+Cancellation
+is effective until the continuation :arg:`contp` is dispatched. However, if an event is
+scheduled on another thread, cancelling it at the right time can be difficult. The return value
+can be checked with :func:`TSActionDone` to see if the continuation ran before the return.
+
+.. note:: Due to scheduling multiple events, the return type is changed to :type:`std::vector<TSAction>`, as compared to the single :type:`TSAction` of the other `TSContSchedule` APIs.
+
+Note that the `TSContSchedule` family of APIs shall only be called from an ATS EThread.
+Calling them from raw non-EThreads can result in unpredictable behavior.
+
+See Also
+========
+
+:doc:`TSContScheduleOnPool.en`
+:doc:`TSContScheduleOnThread.en`
+:doc:`TSContScheduleOnEntirePool.en`
+:doc:`TSContScheduleEveryOnPool.en`
+:doc:`TSContScheduleEveryOnThread.en`
diff --git a/doc/developer-guide/api/functions/TSContScheduleEveryOnPool.en.rst b/doc/developer-guide/api/functions/TSContScheduleEveryOnPool.en.rst
index 01163955081..7f79378c15a 100644
--- a/doc/developer-guide/api/functions/TSContScheduleEveryOnPool.en.rst
+++ b/doc/developer-guide/api/functions/TSContScheduleEveryOnPool.en.rst
@@ -28,7 +28,7 @@ Synopsis
 
     #include <ts/ts.h>
 
-.. function:: TSAction TSContScheduleEveryOnPool(TSCont contp, TSHRTime every)
+.. function:: TSAction TSContScheduleEveryOnPool(TSCont contp, TSHRTime every, TSThreadPool tp)
 
 Description
 ===========
@@ -42,13 +42,12 @@ effective. Note that :arg:`contp` is required to have a mutex, which is provided
 
 The return value can be used to cancel the scheduled event via :func:`TSActionCancel`. This
 is effective until the continuation :arg:`contp` is being dispatched. However, if it is
 scheduled on another thread this can be problematic to be correctly timed. The return value
-can be checked with :func:`TSActionDone` to see if the continuation ran before the return,
-which is possible if :arg:`timeout` is `0`.
+can be checked with :func:`TSActionDone` to see if the continuation ran before the return.
 If :arg:`contp` has no thread affinity set, the thread it is now scheduled on will be set
 as its thread affinity thread.
 
-Note that the TSContSchedule() family of API shall only be called from an ATS EThread.
+Note that the `TSContSchedule` family of APIs shall only be called from an ATS EThread.
 Calling it from raw non-EThreads can result in unpredictable behavior.
 
 See Also
@@ -56,3 +55,6 @@ See Also
 
 :doc:`TSContScheduleOnPool.en`
 :doc:`TSContScheduleOnThread.en`
+:doc:`TSContScheduleOnEntirePool.en`
+:doc:`TSContScheduleEveryOnThread.en`
+:doc:`TSContScheduleEveryOnEntirePool.en`
diff --git a/doc/developer-guide/api/functions/TSContScheduleEveryOnThread.en.rst b/doc/developer-guide/api/functions/TSContScheduleEveryOnThread.en.rst
new file mode 100644
index 00000000000..f94b1330e12
--- /dev/null
+++ b/doc/developer-guide/api/functions/TSContScheduleEveryOnThread.en.rst
@@ -0,0 +1,60 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed
+   with this work for additional information regarding copyright
+   ownership.  The ASF licenses this file to you under the Apache
+   License, Version 2.0 (the "License"); you may not use this file
+   except in compliance with the License.  You may obtain a copy of
+   the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+   implied.  See the License for the specific language governing
+   permissions and limitations under the License.
+
+.. include:: ../../../common.defs
+
+.. default-domain:: c
+
+TSContScheduleEveryOnThread
+***************************
+
+Synopsis
+========
+
+.. code-block:: cpp
+
+    #include <ts/ts.h>
+
+.. function:: TSAction TSContScheduleEveryOnThread(TSCont contp, TSHRTime every, TSEventThread ethread)
+
+Description
+===========
+
+Schedules :arg:`contp` to run every :arg:`every` milliseconds on the thread specified by
+:arg:`ethread`. The :arg:`every` interval is an approximation: the period will be at least
+:arg:`every` milliseconds, but possibly more. Resolutions finer than roughly 5 milliseconds
+will not be effective. Note that :arg:`contp` is required to have a mutex, which is provided
+to :func:`TSContCreate`.
+
+The return value can be used to cancel the scheduled event via :func:`TSActionCancel`. Cancellation
+is effective until the continuation :arg:`contp` is dispatched. However, if the event is
+scheduled on another thread, cancelling it at the right time can be difficult. The return value
+can be checked with :func:`TSActionDone` to see if the continuation ran before the return.
+
+If :arg:`contp` has no thread affinity set, the thread it is now scheduled on will be set
+as its thread affinity thread.
+
+Note that the `TSContSchedule` family of APIs shall only be called from an ATS EThread.
+Calling them from raw non-EThreads can result in unpredictable behavior.
+
+See Also
+========
+
+:doc:`TSContScheduleOnPool.en`
+:doc:`TSContScheduleOnThread.en`
+:doc:`TSContScheduleOnEntirePool.en`
+:doc:`TSContScheduleEveryOnPool.en`
+:doc:`TSContScheduleEveryOnEntirePool.en`
diff --git a/doc/developer-guide/api/functions/TSContScheduleOnEntirePool.en.rst b/doc/developer-guide/api/functions/TSContScheduleOnEntirePool.en.rst
new file mode 100644
index 00000000000..ce77ad98fc1
--- /dev/null
+++ b/doc/developer-guide/api/functions/TSContScheduleOnEntirePool.en.rst
@@ -0,0 +1,60 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed
+   with this work for additional information regarding copyright
+   ownership.
+   The ASF licenses this file to you under the Apache
+   License, Version 2.0 (the "License"); you may not use this file
+   except in compliance with the License.  You may obtain a copy of
+   the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+   implied.  See the License for the specific language governing
+   permissions and limitations under the License.
+
+.. include:: ../../../common.defs
+
+.. default-domain:: c
+
+TSContScheduleOnEntirePool
+**************************
+
+Synopsis
+========
+
+.. code-block:: cpp
+
+    #include <ts/ts.h>
+
+.. function:: std::vector<TSAction> TSContScheduleOnEntirePool(TSCont contp, TSHRTime timeout, TSThreadPool tp)
+
+Description
+===========
+
+Schedules :arg:`contp` to run :arg:`timeout` milliseconds in the future on all threads that
+belong to :arg:`tp`. The :arg:`timeout` is an approximation: the delay will be at least
+:arg:`timeout` milliseconds, but possibly more. Resolutions finer than roughly 5 milliseconds
+will not be effective. Note that :arg:`contp` is required to NOT have a mutex, since the
+continuation is scheduled on multiple threads. This means the continuation must handle
+synchronization itself.
+
+The return value can be used to cancel the scheduled events via :func:`TSActionCancel`. Cancellation
+is effective until the continuation :arg:`contp` is dispatched. However, if an event is
+scheduled on another thread, cancelling it at the right time can be difficult. The return value
+can be checked with :func:`TSActionDone` to see if the continuation ran before the return,
+which is possible if :arg:`timeout` is `0`.
+
+.. note:: Due to scheduling multiple events, the return type is changed to :type:`std::vector<TSAction>`, as compared to the single :type:`TSAction` of the other `TSContSchedule` APIs.
+
+Note that the `TSContSchedule` family of APIs shall only be called from an ATS EThread.
+Calling them from raw non-EThreads can result in unpredictable behavior.
+
+See Also
+========
+
+:doc:`TSContScheduleOnPool.en`
+:doc:`TSContScheduleOnThread.en`
+:doc:`TSContScheduleEveryOnPool.en`
+:doc:`TSContScheduleEveryOnThread.en`
+:doc:`TSContScheduleEveryOnEntirePool.en`
diff --git a/doc/developer-guide/api/functions/TSContScheduleOnPool.en.rst b/doc/developer-guide/api/functions/TSContScheduleOnPool.en.rst
index e044e4f26fa..1301e2a9891 100644
--- a/doc/developer-guide/api/functions/TSContScheduleOnPool.en.rst
+++ b/doc/developer-guide/api/functions/TSContScheduleOnPool.en.rst
@@ -68,13 +68,13 @@ another thread this can be problematic to be correctly timed. The return value c
 If :arg:`contp` has no thread affinity set, the thread it is now scheduled on will be set
 as its thread affinity thread.
 
-Note that the TSContSchedule() family of API shall only be called from an ATS EThread.
+Note that the `TSContSchedule` family of APIs shall only be called from an ATS EThread.
 Calling it from raw non-EThreads can result in unpredictable behavior.
 
-Note that some times if the TASK threads are not ready when you schedule a "contp" on the ``TS_THREAD_POOL_TASK`` then
-the "contp" could end up being executed in a ``ET_NET`` thread instead, more likely this is not what you want.
+Note that sometimes, if the TASK threads are not ready when you schedule a :arg:`contp` on ``TS_THREAD_POOL_TASK``, then
+the :arg:`contp` could end up being executed on an ``ET_NET`` thread instead, which is most likely not what you want.
 To avoid this you can use the ``TS_LIFECYCLE_TASK_THREADS_READY_HOOK`` to make sure that the task threads
-have been started when you schedule a "contp". You can refer to :func:`TSLifecycleHookAdd` for more details.
+have been started when you schedule a :arg:`contp`. You can refer to :func:`TSLifecycleHookAdd` for more details.
 
 Example Scenarios
 =================
 
@@ -82,14 +82,14 @@ Example Scenarios
 Scenario 1 (no thread affinity info, different types of threads)
 ----------------------------------------------------------------
 
-When thread affinity is not set, a plugin calls the API on thread "A" (which is an "ET_TASK" type), and
-wants to schedule on an "ET_NET" type thread provided in "tp", the system would see there is no thread
+When thread affinity is not set, a plugin calls the API on thread "A" (which is an ``ET_TASK`` type) and
+wants to schedule on an ``ET_NET`` type thread provided in :arg:`tp`; the system would see there is no thread
 affinity information stored in "contp."
 
-In this situation, system sees there is no thread affinity information stored in "contp". It then
-checks whether the type of thread "A" is the same as provided in "tp", and sees that "A" is "ET_TASK",
-but "tp" says "ET_NET". So "contp" gets scheduled on the next available "ET_NET" thread provided by a
-round robin list, which we will call thread "B". Since "contp" doesn't have thread affinity information,
+In this situation, the system sees there is no thread affinity information stored in :arg:`contp`. It then
+checks whether the type of thread "A" is the same as provided in :arg:`tp`, and sees that "A" is ``ET_TASK``,
+but :arg:`tp` says ``ET_NET``. So :arg:`contp` gets scheduled on the next available ``ET_NET`` thread provided by a
+round-robin list, which we will call thread "B". Since :arg:`contp` doesn't have thread affinity information,
 thread "B" will be assigned as the affinity thread for it automatically.
 
 The reason for doing this is most of the time people want to schedule the same things on the same type
@@ -99,11 +99,11 @@ thread.
 
 Scenario 2 (no thread affinity info, same types of threads)
 -----------------------------------------------------------
 
-Slight variation of scenario 1, instead of scheduling on a "ET_NET" thread, the plugin wants to schedule
-on a "ET_TASK" thread (i.e. "tp" contains "ET_TASK" now), all other conditions stays the same.
+A slight variation of scenario 1: instead of scheduling on an ``ET_NET`` thread, the plugin wants to schedule
+on an ``ET_TASK`` thread (i.e. :arg:`tp` now contains ``ET_TASK``); all other conditions stay the same.
 
 This time since the type of the desired thread for scheduling and thread "A" are the same, the system
-schedules "contp" on thread "A", and assigns thread "A" as the affinity thread for "contp".
+schedules :arg:`contp` on thread "A", and assigns thread "A" as the affinity thread for :arg:`contp`.
 
 The reason behind this choice is that we are trying to keep things simple such that lock contention
 problems happen less. And for the most part, there is no point in scheduling the same thing on several
@@ -114,14 +114,14 @@ serialized since it is on the same continuation).
 
 Scenario 3 (has thread affinity info, different types of threads)
 -----------------------------------------------------------------
 
-Slight variation of scenario 1, thread affinity is set for continuation "contp" to thread "A", all other
+A slight variation of scenario 1: thread affinity for continuation :arg:`contp` is set to thread "A"; all other
 conditions stay the same.
 
-In this situation, the system sees that the "tp" has "ET_NET", but the type of thread "A" is "ET_TASK".
-So even though "contp" has an affinity thread, the system will not use that information since the type is
-different, instead it schedules "contp" on the next available "ET_NET" thread provided by a round robin
+In this situation, the system sees that :arg:`tp` has ``ET_NET``, but the type of thread "A" is ``ET_TASK``.
+So even though :arg:`contp` has an affinity thread, the system will not use that information since the type is
+different; instead it schedules :arg:`contp` on the next available ``ET_NET`` thread provided by a round-robin
 list, which we will call thread "B". The difference with scenario 1 is that since thread "A" is set to
-be the affinity thread for "contp" already, the system will NOT overwrite that information with thread "B".
+be the affinity thread for :arg:`contp` already, the system will NOT overwrite that information with thread "B".
 
 Most of the time, a continuation will be scheduled on one type of threads, and rarely gets scheduled on
 a different type. But when that happens, we want it to return to the thread it was previously on, so it
@@ -131,10 +131,10 @@ types and thread pointers.
 
 Scenario 4 (has thread affinity info, same types of threads)
 ------------------------------------------------------------
 
-Slight variation of scenario 3, the only difference is "tp" now says "ET_TASK".
+A slight variation of scenario 3: the only difference is that :arg:`tp` now says ``ET_TASK``.
 
-This is the easiest scenario since the type of thread "A" and "tp" are the same, so the system schedules
-"contp" on thread "A". And, as discussed, there is really no reason why one may want to schedule
+This is the easiest scenario: since the types of thread "A" and :arg:`tp` are the same, the system schedules
+:arg:`contp` on thread "A". And, as discussed, there is really no reason why one may want to schedule
 the same continuation on two different threads of the same type.
 
 .. note:: In scenarios 3 & 4, it doesn't matter which thread the plugin is calling the API from.
 
@@ -143,6 +143,9 @@ the same continuation on two different threads of the same type.
 See Also
 ========
 
-:doc:`TSContScheduleEveryOnPool.en`
 :doc:`TSContScheduleOnThread.en`
+:doc:`TSContScheduleOnEntirePool.en`
+:doc:`TSContScheduleEveryOnPool.en`
+:doc:`TSContScheduleEveryOnThread.en`
+:doc:`TSContScheduleEveryOnEntirePool.en`
 :doc:`TSLifecycleHookAdd.en`
diff --git a/doc/developer-guide/api/functions/TSContScheduleOnThread.en.rst b/doc/developer-guide/api/functions/TSContScheduleOnThread.en.rst
index 41148a9d3ad..77152f9f8ba 100644
--- a/doc/developer-guide/api/functions/TSContScheduleOnThread.en.rst
+++ b/doc/developer-guide/api/functions/TSContScheduleOnThread.en.rst
@@ -47,11 +47,14 @@ another thread this can be problematic to be correctly timed. The return value c
 If :arg:`contp` has no thread affinity set, the thread it is now scheduled on will be set
 as its thread affinity thread.
 
-Note that the TSContSchedule() family of API shall only be called from an ATS EThread.
+Note that the `TSContSchedule` family of APIs shall only be called from an ATS EThread.
 Calling it from raw non-EThreads can result in unpredictable behavior.
 
 See Also
 ========
 
 :doc:`TSContScheduleOnPool.en`
+:doc:`TSContScheduleOnEntirePool.en`
 :doc:`TSContScheduleEveryOnPool.en`
+:doc:`TSContScheduleEveryOnThread.en`
+:doc:`TSContScheduleEveryOnEntirePool.en`
diff --git a/include/iocore/eventsystem/EventProcessor.h b/include/iocore/eventsystem/EventProcessor.h
index 1d5ff07cf4f..3846ac49b28 100644
--- a/include/iocore/eventsystem/EventProcessor.h
+++ b/include/iocore/eventsystem/EventProcessor.h
@@ -219,6 +219,9 @@ class EventProcessor : public Processor
   Event *schedule_every(Continuation *c, ink_hrtime aperiod, EventType event_type = ET_CALL, int callback_event = EVENT_INTERVAL,
                         void *cookie = nullptr);
 
+  std::vector<TSAction> schedule_entire(Continuation *c, ink_hrtime atimeout, ink_hrtime aperiod, EventType event_type = ET_CALL,
+                                        int callback_event = EVENT_IMMEDIATE, void *cookie = nullptr);
+
   ////////////////////////////////////////////
   // reschedule an already scheduled event. //
   // may be called directly or called by    //
diff --git a/include/ts/ts.h b/include/ts/ts.h
index b244610a8ba..6b7ebcc8746 100644
--- a/include/ts/ts.h
+++ b/include/ts/ts.h
@@ -34,6 +34,7 @@
 #endif
 
 #include
+#include <vector>
 
 #include "tsutil/DbgCtl.h"
 
 #include "ts/apidefs.h"
@@ -1283,8 +1284,10 @@ void TSContDataSet(TSCont contp, void *data);
 void *TSContDataGet(TSCont contp);
 TSAction TSContScheduleOnPool(TSCont contp, TSHRTime timeout, TSThreadPool tp);
 TSAction TSContScheduleOnThread(TSCont contp, TSHRTime timeout, TSEventThread ethread);
+std::vector<TSAction> TSContScheduleOnEntirePool(TSCont contp, TSHRTime timeout, TSThreadPool tp);
 TSAction TSContScheduleEveryOnPool(TSCont contp, TSHRTime every /* millisecs */, TSThreadPool tp);
 TSAction TSContScheduleEveryOnThread(TSCont contp, TSHRTime every /* millisecs */, TSEventThread ethread);
+std::vector<TSAction> TSContScheduleEveryOnEntirePool(TSCont contp, TSHRTime every /* millisecs */, TSThreadPool tp);
 TSReturnCode TSContThreadAffinitySet(TSCont contp, TSEventThread ethread);
 TSEventThread TSContThreadAffinityGet(TSCont contp);
 void TSContThreadAffinityClear(TSCont contp);
diff --git a/src/api/InkAPI.cc b/src/api/InkAPI.cc
index 97717fc5ed1..91cba8252f9 100644
--- a/src/api/InkAPI.cc
+++ b/src/api/InkAPI.cc
@@ -3444,6 +3444,46 @@ TSContScheduleOnThread(TSCont contp, TSHRTime timeout, TSEventThread ethread)
   return action;
 }
 
+std::vector<TSAction>
+TSContScheduleOnEntirePool(TSCont contp, TSHRTime timeout, TSThreadPool tp)
+{
+  sdk_assert(sdk_sanity_check_iocore_structure(contp) == TS_SUCCESS);
+
+  /* ensure we are on a EThread */
+  sdk_assert(sdk_sanity_check_null_ptr((void *)this_ethread()) == TS_SUCCESS);
+
+  INKContInternal *i = reinterpret_cast<INKContInternal *>(contp);
+
+  // This is to allow the continuation to be scheduled on multiple threads
+  sdk_assert(i->mutex == nullptr);
+
+  EventType etype;
+
+  switch (tp) {
+  case TS_THREAD_POOL_NET:
+    etype = ET_NET;
+    break;
+  case TS_THREAD_POOL_TASK:
+    etype = ET_TASK;
+    break;
+  case TS_THREAD_POOL_DNS:
+    etype = ET_DNS;
+    break;
+  case TS_THREAD_POOL_UDP:
+    etype = ET_UDP;
+    break;
+  default:
+    etype = ET_TASK;
+    break;
+  }
+
+  if (ink_atomic_increment(static_cast<int *>(&i->m_event_count), eventProcessor.thread_group[etype]._count) < 0) {
+    ink_assert(!"not reached");
+  }
+
+  return eventProcessor.schedule_entire(i, HRTIME_MSECONDS(timeout), 0, etype, timeout == 0 ? EVENT_IMMEDIATE : EVENT_INTERVAL);
+}
+
 TSAction
 TSContScheduleEveryOnPool(TSCont contp, TSHRTime every, TSThreadPool tp)
 {
@@ -3469,6 +3509,12 @@ TSContScheduleEveryOnPool(TSCont contp, TSHRTime every, TSThreadPool tp)
   case TS_THREAD_POOL_TASK:
     etype = ET_TASK;
     break;
+  case TS_THREAD_POOL_DNS:
+    etype = ET_DNS;
+    break;
+  case TS_THREAD_POOL_UDP:
+    etype = ET_UDP;
+    break;
   default:
     etype = ET_TASK;
     break;
@@ -3482,7 +3528,7 @@
 }
 
 TSAction
-TSContScheduleEveryOnThread(TSCont contp, TSHRTime every /* millisecs */, TSEventThread ethread)
+TSContScheduleEveryOnThread(TSCont contp, TSHRTime every, TSEventThread ethread)
 {
   ink_release_assert(ethread != nullptr);
 
@@ -3508,6 +3554,48 @@ TSContScheduleEveryOnThread(TSCont contp, TSHRTime every /* millisecs */, TSEven
   return action;
 }
 
+std::vector<TSAction>
+TSContScheduleEveryOnEntirePool(TSCont contp, TSHRTime every, TSThreadPool tp)
+{
+  sdk_assert(sdk_sanity_check_iocore_structure(contp) == TS_SUCCESS);
+
+  /* ensure we are on a EThread */
+  sdk_assert(sdk_sanity_check_null_ptr((void *)this_ethread()) == TS_SUCCESS);
+
+  sdk_assert(every != 0);
+
+  INKContInternal *i = reinterpret_cast<INKContInternal *>(contp);
+
+  // This is to allow the continuation to be scheduled on multiple threads
+  sdk_assert(i->mutex == nullptr);
+
+  EventType etype;
+
+  switch (tp) {
+  case TS_THREAD_POOL_NET:
+    etype = ET_NET;
+    break;
+  case TS_THREAD_POOL_TASK:
+    etype = ET_TASK;
+    break;
+  case TS_THREAD_POOL_DNS:
+    etype = ET_DNS;
+    break;
+  case TS_THREAD_POOL_UDP:
+    etype = ET_UDP;
+    break;
+  default:
+    etype = ET_TASK;
+    break;
+  }
+
+  if (ink_atomic_increment(static_cast<int *>(&i->m_event_count), eventProcessor.thread_group[etype]._count) < 0) {
+    ink_assert(!"not reached");
+  }
+
+  return eventProcessor.schedule_entire(i, 0, HRTIME_MSECONDS(every), etype, EVENT_INTERVAL);
+}
+
 TSReturnCode
 TSContThreadAffinitySet(TSCont contp, TSEventThread ethread)
 {
diff --git a/src/iocore/eventsystem/P_UnixEventProcessor.h b/src/iocore/eventsystem/P_UnixEventProcessor.h
index be194353a7a..4730fe8cd41 100644
--- a/src/iocore/eventsystem/P_UnixEventProcessor.h
+++ b/src/iocore/eventsystem/P_UnixEventProcessor.h
@@ -23,6 +23,8 @@
 #pragma once
 
+#include <vector>
+
 #include
 
 #include "tscore/ink_align.h"
@@ -198,3 +200,47 @@ EventProcessor::schedule_every(Continuation *cont, ink_hrtime t, EventType et, i
     return schedule(e->init(cont, ink_get_hrtime() + t, t), et);
   }
 }
+
+TS_INLINE std::vector<TSAction>
+EventProcessor::schedule_entire(Continuation *cont, ink_hrtime t, ink_hrtime p, EventType et, int callback_event, void *cookie)
+{
+  ThreadGroupDescriptor *tg = &thread_group[et];
+  EThread *curr_thread      = this_ethread();
+
+  std::vector<TSAction> actions;
+
+  for (int i = 0; i < tg->_count; i++) {
+    Event *e = eventAllocator.alloc();
+
+    e->ethread        = tg->_thread[i];
+    e->callback_event = callback_event;
+    e->cookie         = cookie;
+
+    if (t == 0 && p == 0) {
+      e->init(cont, 0, 0);
+    } else if (t != 0 && p == 0) {
+      e->init(cont, ink_get_hrtime() + t, 0);
+    } else if (t == 0 && p != 0) {
+      if (p < 0) {
+        e->init(cont, p, p);
+      } else {
+        e->init(cont, ink_get_hrtime() + p, p);
+      }
+    } else {
+      ink_assert(!"not reached");
+    }
+
+    e->mutex = new_ProxyMutex();
+
+    if (curr_thread != nullptr && e->ethread == curr_thread) {
+      e->ethread->EventQueueExternal.enqueue_local(e);
+    } else {
+      e->ethread->EventQueueExternal.enqueue(e);
+    }
+
+    /* This is a hack. Should be handled in ink_types */
+    actions.push_back((TSAction)(reinterpret_cast<uintptr_t>(e) | 0x1));
+  }
+
+  return actions;
+}
diff --git a/tests/gold_tests/cont_schedule/entire_pool.py b/tests/gold_tests/cont_schedule/entire_pool.py
new file mode 100755
index 00000000000..2626960ab5e
--- /dev/null
+++ b/tests/gold_tests/cont_schedule/entire_pool.py
@@ -0,0 +1,57 @@
+#!/usr/bin/env python3
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+'''
+A simple tool to check that a continuation is scheduled on an entire thread pool.
+''' + +import sys +import re + + +def main(): + log_path = sys.argv[1] + thread_type = sys.argv[2] + thread_num = int(sys.argv[3]) + min_count = int(sys.argv[4]) + thread_check = {} + with open(log_path, 'r') as f: + for line in f: + match = re.search(rf'\[({thread_type}) (\d+)\]', line) + if not match: + continue + thread_name = f'{match.group(1)} {match.group(2):0>2}' + if thread_name not in thread_check: + thread_check[thread_name] = 0 + thread_check[thread_name] += 1 + + for thread_name in sorted(thread_check): + if thread_check[thread_name] < min_count: + print(f'{thread_name}: {thread_check[thread_name]} (fail)') + else: + print(f'{thread_name}: {thread_check[thread_name]} (pass)') + + if len(thread_check) != thread_num: + print(f'total threads: {len(thread_check)} (fail)') + else: + print(f'total threads: {len(thread_check)} (pass)') + + exit(0) + + +if __name__ == '__main__': + main() diff --git a/tests/gold_tests/cont_schedule/gold/schedule_every_on_entire_pool.gold b/tests/gold_tests/cont_schedule/gold/schedule_every_on_entire_pool.gold new file mode 100644 index 00000000000..dd5709a7fba --- /dev/null +++ b/tests/gold_tests/cont_schedule/gold/schedule_every_on_entire_pool.gold @@ -0,0 +1,33 @@ +ET_NET 00: `` (pass) +ET_NET 01: `` (pass) +ET_NET 02: `` (pass) +ET_NET 03: `` (pass) +ET_NET 04: `` (pass) +ET_NET 05: `` (pass) +ET_NET 06: `` (pass) +ET_NET 07: `` (pass) +ET_NET 08: `` (pass) +ET_NET 09: `` (pass) +ET_NET 10: `` (pass) +ET_NET 11: `` (pass) +ET_NET 12: `` (pass) +ET_NET 13: `` (pass) +ET_NET 14: `` (pass) +ET_NET 15: `` (pass) +ET_NET 16: `` (pass) +ET_NET 17: `` (pass) +ET_NET 18: `` (pass) +ET_NET 19: `` (pass) +ET_NET 20: `` (pass) +ET_NET 21: `` (pass) +ET_NET 22: `` (pass) +ET_NET 23: `` (pass) +ET_NET 24: `` (pass) +ET_NET 25: `` (pass) +ET_NET 26: `` (pass) +ET_NET 27: `` (pass) +ET_NET 28: `` (pass) +ET_NET 29: `` (pass) +ET_NET 30: `` (pass) +ET_NET 31: `` (pass) +total threads: 32 (pass) diff --git 
a/tests/gold_tests/cont_schedule/gold/schedule_every_on_pool.gold b/tests/gold_tests/cont_schedule/gold/schedule_every_on_pool.gold new file mode 100644 index 00000000000..d99fee038f9 --- /dev/null +++ b/tests/gold_tests/cont_schedule/gold/schedule_every_on_pool.gold @@ -0,0 +1,7 @@ +`` +``ET_NET``TSContScheduleEveryOnPool handler`` +``ET_NET``TSContScheduleEveryOnPool handler`` +``(TSContSchedule_test.check) pass [should be the same thread] +``ET_NET``TSContScheduleEveryOnPool handler`` +``(TSContSchedule_test.check) pass [should be the same thread] +`` diff --git a/tests/gold_tests/cont_schedule/gold/schedule_every_on_thread.gold b/tests/gold_tests/cont_schedule/gold/schedule_every_on_thread.gold new file mode 100644 index 00000000000..074220b1711 --- /dev/null +++ b/tests/gold_tests/cont_schedule/gold/schedule_every_on_thread.gold @@ -0,0 +1,8 @@ +`` +``ET_TASK``TSContScheduleEveryOnThread handler`` +``(TSContSchedule_test.check) pass [should be the same thread] +``ET_TASK``TSContScheduleEveryOnThread handler`` +``(TSContSchedule_test.check) pass [should be the same thread] +``ET_TASK``TSContScheduleEveryOnThread handler`` +``(TSContSchedule_test.check) pass [should be the same thread] +`` diff --git a/tests/gold_tests/cont_schedule/gold/schedule_on_entire_pool.gold b/tests/gold_tests/cont_schedule/gold/schedule_on_entire_pool.gold new file mode 100644 index 00000000000..ea8b206ea8b --- /dev/null +++ b/tests/gold_tests/cont_schedule/gold/schedule_on_entire_pool.gold @@ -0,0 +1,33 @@ +ET_NET 00: 1 (pass) +ET_NET 01: 1 (pass) +ET_NET 02: 1 (pass) +ET_NET 03: 1 (pass) +ET_NET 04: 1 (pass) +ET_NET 05: 1 (pass) +ET_NET 06: 1 (pass) +ET_NET 07: 1 (pass) +ET_NET 08: 1 (pass) +ET_NET 09: 1 (pass) +ET_NET 10: 1 (pass) +ET_NET 11: 1 (pass) +ET_NET 12: 1 (pass) +ET_NET 13: 1 (pass) +ET_NET 14: 1 (pass) +ET_NET 15: 1 (pass) +ET_NET 16: 1 (pass) +ET_NET 17: 1 (pass) +ET_NET 18: 1 (pass) +ET_NET 19: 1 (pass) +ET_NET 20: 1 (pass) +ET_NET 21: 1 (pass) +ET_NET 22: 1 (pass) 
+ET_NET 23: 1 (pass) +ET_NET 24: 1 (pass) +ET_NET 25: 1 (pass) +ET_NET 26: 1 (pass) +ET_NET 27: 1 (pass) +ET_NET 28: 1 (pass) +ET_NET 29: 1 (pass) +ET_NET 30: 1 (pass) +ET_NET 31: 1 (pass) +total threads: 32 (pass) diff --git a/tests/gold_tests/cont_schedule/schedule_every_on_entire_pool.test.py b/tests/gold_tests/cont_schedule/schedule_every_on_entire_pool.test.py new file mode 100644 index 00000000000..a63a3f3817c --- /dev/null +++ b/tests/gold_tests/cont_schedule/schedule_every_on_entire_pool.test.py @@ -0,0 +1,64 @@ +''' +''' +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os +import sys + +Test.Summary = 'Test TSContScheduleEveryOnEntirePool API' +Test.ContinueOnFail = True + +# Define default ATS +ts = Test.MakeATSProcess('ts') + +Test.testName = 'Test TSContScheduleEveryOnEntirePool API' + +ts.Disk.records_config.update( + { + 'proxy.config.exec_thread.autoconfig.enabled': 0, + 'proxy.config.exec_thread.autoconfig.scale': 1.5, + 'proxy.config.exec_thread.limit': 32, + 'proxy.config.accept_threads': 1, + 'proxy.config.task_threads': 2, + 'proxy.config.diags.debug.enabled': 1, + 'proxy.config.diags.debug.tags': 'TSContSchedule_test' + }) + +ts.Setup.Copy('entire_pool.py') + +# Load plugin +Test.PrepareTestPlugin(os.path.join(Test.Variables.AtsTestPluginsDir, 'cont_schedule.so'), ts, 'every_entire') + +# www.example.com Host +tr = Test.AddTestRun() +tr.Processes.Default.Command = 'printf "Test TSContScheduleEveryOnEntirePool API"' +tr.Processes.Default.ReturnCode = 0 +tr.Processes.Default.StartBefore(ts) + +tr = Test.AddTestRun("Wait traffic.out to be written") +timeout = 30 +watcher = tr.Processes.Process("watcher") +watcher.Command = f"sleep {timeout}" +watcher.Ready = When.FileExists(ts.Disk.traffic_out.AbsPath) +watcher.TimeOut = timeout +tr.TimeOut = timeout +tr.DelayStart = 3 +tr.Processes.Default.StartBefore(watcher) +tr.Processes.Default.Command = f'{sys.executable} entire_pool.py {ts.Disk.traffic_out.AbsPath} ET_NET 32 2' +tr.Processes.Default.ReturnCode = 0 +tr.Processes.Default.Streams.All = "gold/schedule_every_on_entire_pool.gold" +tr.Processes.Default.Streams.All += Testers.ExcludesExpression('fail', 'should not contain "fail"') diff --git a/tests/gold_tests/cont_schedule/schedule_every_on_pool.test.py b/tests/gold_tests/cont_schedule/schedule_every_on_pool.test.py new file mode 100644 index 00000000000..d0db9097339 --- /dev/null +++ b/tests/gold_tests/cont_schedule/schedule_every_on_pool.test.py @@ -0,0 +1,62 @@ +''' +''' +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor 
license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os + +Test.Summary = 'Test TSContScheduleEveryOnPool API' +Test.ContinueOnFail = True + +# Define default ATS +ts = Test.MakeATSProcess('ts') + +Test.testName = 'Test TSContScheduleEveryOnPool API' + +ts.Disk.records_config.update( + { + 'proxy.config.exec_thread.autoconfig.enabled': 0, + 'proxy.config.exec_thread.autoconfig.scale': 1.5, + 'proxy.config.exec_thread.limit': 32, + 'proxy.config.accept_threads': 1, + 'proxy.config.task_threads': 2, + 'proxy.config.diags.debug.enabled': 1, + 'proxy.config.diags.debug.tags': 'TSContSchedule_test' + }) + +# Load plugin +Test.PrepareTestPlugin(os.path.join(Test.Variables.AtsTestPluginsDir, 'cont_schedule.so'), ts, 'every_pool') + +# Start ATS and print the test name +tr = Test.AddTestRun() +tr.Processes.Default.Command = 'printf "Test TSContScheduleEveryOnPool API"' +tr.Processes.Default.ReturnCode = 0 +tr.Processes.Default.StartBefore(ts) + +tr = Test.AddTestRun("Wait for traffic.out to be written") +timeout = 30 +watcher = tr.Processes.Process("watcher") +watcher.Command = f"sleep {timeout}" +watcher.Ready = When.FileExists(ts.Disk.traffic_out.AbsPath) +watcher.TimeOut = timeout +tr.TimeOut = timeout +tr.DelayStart = 3 +tr.Processes.Default.StartBefore(watcher) +tr.Processes.Default.Command = 'printf "await traffic.out"'
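The test runs above gate the log check on a `watcher` process: a `sleep` whose `Ready` condition is `When.FileExists(...)`, so the checking step only proceeds once `traffic.out` exists, bounded by `timeout`. A minimal plain-Python sketch of that poll-with-deadline pattern (the helper name is invented for illustration, not part of autest):

```python
import os
import tempfile
import threading
import time

def wait_for_file(path, timeout, poll=0.05):
    """Poll until `path` exists or `timeout` seconds elapse; a plain-Python
    stand-in for the autest When.FileExists readiness gate. The helper name
    is invented for illustration."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll)
    return os.path.exists(path)

# A second thread creates the file shortly after we start waiting.
target = os.path.join(tempfile.mkdtemp(), 'traffic.out')
threading.Timer(0.2, lambda: open(target, 'w').close()).start()
print(wait_for_file(target, timeout=5))  # True
```

The point of the pattern is that the verification command never races the plugin's first log write: it either sees the file or fails deterministically at the deadline.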
+tr.Processes.Default.ReturnCode = 0 +ts.Disk.traffic_out.Content = "gold/schedule_every_on_pool.gold" +ts.Disk.traffic_out.Content += Testers.IncludesExpression('pass', 'must contain "pass"') +ts.Disk.traffic_out.Content += Testers.ExcludesExpression('fail', 'should not contain "fail"') diff --git a/tests/gold_tests/cont_schedule/schedule_every_on_thread.test.py b/tests/gold_tests/cont_schedule/schedule_every_on_thread.test.py new file mode 100644 index 00000000000..659a0710735 --- /dev/null +++ b/tests/gold_tests/cont_schedule/schedule_every_on_thread.test.py @@ -0,0 +1,62 @@ +''' +''' +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
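The content testers above accept a run when `traffic.out` contains `pass` and no `fail`; those lines come from checks inside the plugin's handlers. For the `every_*` variants the central property being checked is that repeated firings of a pinned continuation stay on one thread. As a loose plain-Python analogy (ordinary threads and a queue, not the ATS event system):

```python
import queue
import threading

def run_every_on_thread(callback, times):
    """Drive `times` callback firings from one dedicated worker thread,
    loosely analogous to a continuation pinned by TSContScheduleEveryOnThread.
    Plain-Python analogy only; this is not the ATS scheduler."""
    ticks = queue.Queue()
    results = []

    def worker():
        for _ in range(times):
            ticks.get()               # wait for the next "tick"
            results.append(callback())

    t = threading.Thread(target=worker)
    t.start()
    for _ in range(times):
        ticks.put(None)               # fire an event
    t.join()
    return results

# Every firing should observe the same worker thread id.
ids = run_every_on_thread(threading.get_ident, 5)
print('pass' if len(set(ids)) == 1 else 'fail')  # pass
```

In the real plugin the analogous check is `thread_1 == TSEventThreadSelf()` on each firing, logged as the `pass`/`fail` lines the testers match.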
+ +import os + +Test.Summary = 'Test TSContScheduleEveryOnThread API' +Test.ContinueOnFail = True + +# Define default ATS +ts = Test.MakeATSProcess('ts') + +Test.testName = 'Test TSContScheduleEveryOnThread API' + +ts.Disk.records_config.update( + { + 'proxy.config.exec_thread.autoconfig.enabled': 0, + 'proxy.config.exec_thread.autoconfig.scale': 1.5, + 'proxy.config.exec_thread.limit': 32, + 'proxy.config.accept_threads': 1, + 'proxy.config.task_threads': 2, + 'proxy.config.diags.debug.enabled': 1, + 'proxy.config.diags.debug.tags': 'TSContSchedule_test' + }) + +# Load plugin +Test.PrepareTestPlugin(os.path.join(Test.Variables.AtsTestPluginsDir, 'cont_schedule.so'), ts, 'every_thread') + +# Start ATS and print the test name +tr = Test.AddTestRun() +tr.Processes.Default.Command = 'printf "Test TSContScheduleEveryOnThread API"' +tr.Processes.Default.ReturnCode = 0 +tr.Processes.Default.StartBefore(ts) + +tr = Test.AddTestRun("Wait for traffic.out to be written") +timeout = 30 +watcher = tr.Processes.Process("watcher") +watcher.Command = f"sleep {timeout}" +watcher.Ready = When.FileExists(ts.Disk.traffic_out.AbsPath) +watcher.TimeOut = timeout +tr.TimeOut = timeout +tr.DelayStart = 3 +tr.Processes.Default.StartBefore(watcher) +tr.Processes.Default.Command = 'printf "await traffic.out"' +tr.Processes.Default.ReturnCode = 0 +ts.Disk.traffic_out.Content = "gold/schedule_every_on_thread.gold" +ts.Disk.traffic_out.Content += Testers.IncludesExpression('pass', 'must contain "pass"') +ts.Disk.traffic_out.Content += Testers.ExcludesExpression('fail', 'should not contain "fail"') diff --git a/tests/gold_tests/cont_schedule/schedule_on_entire_pool.test.py b/tests/gold_tests/cont_schedule/schedule_on_entire_pool.test.py new file mode 100644 index 00000000000..c199db600d6 --- /dev/null +++ b/tests/gold_tests/cont_schedule/schedule_on_entire_pool.test.py @@ -0,0 +1,64 @@ +''' +''' +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements.
See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys + +Test.Summary = 'Test TSContScheduleOnEntirePool API' +Test.ContinueOnFail = True + +# Define default ATS +ts = Test.MakeATSProcess('ts') + +Test.testName = 'Test TSContScheduleOnEntirePool API' + +ts.Disk.records_config.update( + { + 'proxy.config.exec_thread.autoconfig.enabled': 0, + 'proxy.config.exec_thread.autoconfig.scale': 1.5, + 'proxy.config.exec_thread.limit': 32, + 'proxy.config.accept_threads': 1, + 'proxy.config.task_threads': 2, + 'proxy.config.diags.debug.enabled': 1, + 'proxy.config.diags.debug.tags': 'TSContSchedule_test' + }) + +ts.Setup.Copy('entire_pool.py') + +# Load plugin +Test.PrepareTestPlugin(os.path.join(Test.Variables.AtsTestPluginsDir, 'cont_schedule.so'), ts, 'entire') + +# Start ATS and print the test name +tr = Test.AddTestRun() +tr.Processes.Default.Command = 'printf "Test TSContScheduleOnEntirePool API"' +tr.Processes.Default.ReturnCode = 0 +tr.Processes.Default.StartBefore(ts) + +tr = Test.AddTestRun("Wait for traffic.out to be written") +timeout = 30 +watcher = tr.Processes.Process("watcher") +watcher.Command = f"sleep {timeout}" +watcher.Ready = When.FileExists(ts.Disk.traffic_out.AbsPath) +watcher.TimeOut = timeout +tr.TimeOut = timeout +tr.DelayStart = 2 +tr.Processes.Default.StartBefore(watcher) +tr.Processes.Default.Command =
f'{sys.executable} entire_pool.py {ts.Disk.traffic_out.AbsPath} ET_NET 32 1' +tr.Processes.Default.ReturnCode = 0 +tr.Processes.Default.Streams.All = "gold/schedule_on_entire_pool.gold" +tr.Processes.Default.Streams.All += Testers.ExcludesExpression('fail', 'should not contain "fail"') diff --git a/tests/tools/plugins/cont_schedule.cc b/tests/tools/plugins/cont_schedule.cc index 04f28689b73..b5ab12db578 100644 --- a/tests/tools/plugins/cont_schedule.cc +++ b/tests/tools/plugins/cont_schedule.cc @@ -47,11 +47,57 @@ static TSEventThread thread_2 = nullptr; static TSCont contp_1 = nullptr; static TSCont contp_2 = nullptr; +static int TSContThreadAffinity_handler(TSCont contp, TSEvent event, void *edata); static int TSContScheduleOnPool_handler_1(TSCont contp, TSEvent event, void *edata); static int TSContScheduleOnPool_handler_2(TSCont contp, TSEvent event, void *edata); static int TSContScheduleOnThread_handler_1(TSCont contp, TSEvent event, void *edata); static int TSContScheduleOnThread_handler_2(TSCont contp, TSEvent event, void *edata); -static int TSContThreadAffinity_handler(TSCont contp, TSEvent event, void *edata); +static int TSContScheduleOnEntirePool_handler(TSCont contp, TSEvent event, void *edata); +static int TSContScheduleEveryOnPool_handler(TSCont contp, TSEvent event, void *edata); +static int TSContScheduleEveryOnThread_handler(TSCont contp, TSEvent event, void *edata); +static int TSContScheduleEveryOnEntirePool_handler(TSCont contp, TSEvent event, void *edata); + +static int +TSContThreadAffinity_handler(TSCont contp, TSEvent event, void *edata) +{ + Dbg(dbg_ctl_hdl, "TSContThreadAffinity handler thread [%p]", TSThreadSelf()); + + thread_1 = TSEventThreadSelf(); + + if (TSContThreadAffinityGet(contp) != nullptr) { + Dbg(dbg_ctl_chk, "pass [affinity thread is not null]"); + TSContThreadAffinityClear(contp); + if (TSContThreadAffinityGet(contp) == nullptr) { + Dbg(dbg_ctl_chk, "pass [affinity thread is cleared]"); + TSContThreadAffinitySet(contp, 
TSEventThreadSelf()); + if (TSContThreadAffinityGet(contp) == thread_1) { + Dbg(dbg_ctl_chk, "pass [affinity thread is set]"); + } else { + Dbg(dbg_ctl_chk, "fail [affinity thread is not set]"); + } + } else { + Dbg(dbg_ctl_chk, "fail [affinity thread is not cleared]"); + } + } else { + Dbg(dbg_ctl_chk, "fail [affinity thread is null]"); + } + + return 0; +} + +void +TSContThreadAffinity_test() +{ + TSCont contp = TSContCreate(TSContThreadAffinity_handler, TSMutexCreate()); + + if (contp == nullptr) { + Dbg(dbg_ctl_schd, "[%s] could not create continuation", plugin_name); + abort(); + } else { + Dbg(dbg_ctl_schd, "[%s] scheduling continuation", plugin_name); + TSContScheduleOnPool(contp, 0, TS_THREAD_POOL_NET); + } +} static int TSContScheduleOnPool_handler_1(TSCont contp, TSEvent event, void *edata) @@ -171,44 +217,106 @@ TSContScheduleOnThread_test() } static int -TSContThreadAffinity_handler(TSCont contp, TSEvent event, void *edata) +TSContScheduleOnEntirePool_handler(TSCont contp, TSEvent event, void *edata) { - Dbg(dbg_ctl_hdl, "TSContThreadAffinity handler thread [%p]", TSThreadSelf()); + Dbg(dbg_ctl_hdl, "TSContScheduleOnEntirePool handler thread [%p]", TSThreadSelf()); + return 0; +} - thread_1 = TSEventThreadSelf(); +void +TSContScheduleOnEntirePool_test() +{ + TSCont contp = TSContCreate(TSContScheduleOnEntirePool_handler, nullptr); - if (TSContThreadAffinityGet(contp) != nullptr) { - Dbg(dbg_ctl_chk, "pass [affinity thread is not null]"); - TSContThreadAffinityClear(contp); - if (TSContThreadAffinityGet(contp) == nullptr) { - Dbg(dbg_ctl_chk, "pass [affinity thread is cleared]"); - TSContThreadAffinitySet(contp, TSEventThreadSelf()); - if (TSContThreadAffinityGet(contp) == thread_1) { - Dbg(dbg_ctl_chk, "pass [affinity thread is set]"); - } else { - Dbg(dbg_ctl_chk, "fail [affinity thread is not set]"); - } + if (contp == nullptr) { + Dbg(dbg_ctl_schd, "[%s] could not create continuation", plugin_name); + abort(); + } else { + Dbg(dbg_ctl_schd, "[%s] 
scheduling continuation", plugin_name); + TSContScheduleOnEntirePool(contp, 0, TS_THREAD_POOL_NET); + } +} + +static int +TSContScheduleEveryOnPool_handler(TSCont contp, TSEvent event, void *edata) +{ + Dbg(dbg_ctl_hdl, "TSContScheduleEveryOnPool handler thread [%p]", TSThreadSelf()); + + if (thread_1 == nullptr) { + // First time here, record thread id. + thread_1 = TSEventThreadSelf(); + } else { + // Subsequent runs should land on the same thread, since the continuation keeps its thread affinity. + if (thread_1 == TSEventThreadSelf()) { + Dbg(dbg_ctl_chk, "pass [should be the same thread]"); } else { - Dbg(dbg_ctl_chk, "fail [affinity thread is not cleared]"); + Dbg(dbg_ctl_chk, "fail [not on the same thread]"); } + } + return 0; +} + +void +TSContScheduleEveryOnPool_test() +{ + TSCont contp = TSContCreate(TSContScheduleEveryOnPool_handler, TSMutexCreate()); + + if (contp == nullptr) { + Dbg(dbg_ctl_schd, "[%s] could not create continuation", plugin_name); + abort(); } else { - Dbg(dbg_ctl_chk, "fail [affinity thread is null]"); + Dbg(dbg_ctl_schd, "[%s] scheduling continuation", plugin_name); + TSContScheduleEveryOnPool(contp, 900, TS_THREAD_POOL_NET); } +} +static int +TSContScheduleEveryOnThread_handler(TSCont contp, TSEvent event, void *edata) +{ + Dbg(dbg_ctl_hdl, "TSContScheduleEveryOnThread handler thread [%p]", TSThreadSelf()); + + if (thread_1 == TSEventThreadSelf()) { + Dbg(dbg_ctl_chk, "pass [should be the same thread]"); + } else { + Dbg(dbg_ctl_chk, "fail [not on the same thread]"); + } return 0; } void -TSContThreadAffinity_test() +TSContScheduleEveryOnThread_test() { - TSCont contp = TSContCreate(TSContThreadAffinity_handler, TSMutexCreate()); + TSCont contp = TSContCreate(TSContScheduleEveryOnThread_handler, TSMutexCreate()); + + thread_1 = TSEventThreadSelf(); if (contp == nullptr) { Dbg(dbg_ctl_schd, "[%s] could not create continuation", plugin_name); abort(); } else { Dbg(dbg_ctl_schd, "[%s] scheduling continuation", plugin_name); - TSContScheduleOnPool(contp,
0, TS_THREAD_POOL_NET); + TSContScheduleEveryOnThread(contp, 900, thread_1); + } +} + +static int +TSContScheduleEveryOnEntirePool_handler(TSCont contp, TSEvent event, void *edata) +{ + Dbg(dbg_ctl_hdl, "TSContScheduleEveryOnEntirePool handler thread [%p]", TSThreadSelf()); + return 0; +} + +void +TSContScheduleEveryOnEntirePool_test() +{ + TSCont contp = TSContCreate(TSContScheduleEveryOnEntirePool_handler, nullptr); + + if (contp == nullptr) { + Dbg(dbg_ctl_schd, "[%s] could not create continuation", plugin_name); + abort(); + } else { + Dbg(dbg_ctl_schd, "[%s] scheduling continuation", plugin_name); + TSContScheduleEveryOnEntirePool(contp, 900, TS_THREAD_POOL_NET); } } @@ -218,13 +326,25 @@ LifecycleHookTracer(TSCont contp, TSEvent event, void *edata) if (event == TS_EVENT_LIFECYCLE_TASK_THREADS_READY) { switch (test_flag) { case 1: - TSContScheduleOnPool_test(); + TSContThreadAffinity_test(); break; case 2: - TSContScheduleOnThread_test(); + TSContScheduleOnPool_test(); break; case 3: - TSContThreadAffinity_test(); + TSContScheduleOnThread_test(); + break; + case 4: + TSContScheduleOnEntirePool_test(); + break; + case 5: + TSContScheduleEveryOnPool_test(); + break; + case 6: + TSContScheduleEveryOnThread_test(); + break; + case 7: + TSContScheduleEveryOnEntirePool_test(); break; default: break; @@ -238,15 +358,27 @@ TSPluginInit(int argc, const char *argv[]) { if (argc == 2) { int len = strlen(argv[1]); - if (len == 4 && strncmp(argv[1], "pool", 4) == 0) { - Dbg(dbg_ctl_init, "initializing plugin for testing TSContScheduleOnPool"); + if (len == 8 && strncmp(argv[1], "affinity", 8) == 0) { + Dbg(dbg_ctl_init, "initializing plugin for testing TSContThreadAffinity"); test_flag = 1; + } else if (len == 4 && strncmp(argv[1], "pool", 4) == 0) { + Dbg(dbg_ctl_init, "initializing plugin for testing TSContScheduleOnPool"); + test_flag = 2; } else if (len == 6 && strncmp(argv[1], "thread", 6) == 0) { Dbg(dbg_ctl_init, "initializing plugin for testing 
TSContScheduleOnThread"); - test_flag = 2; - } else if (len == 8 && strncmp(argv[1], "affinity", 8) == 0) { - Dbg(dbg_ctl_init, "initializing plugin for testing TSContThreadAffinity"); test_flag = 3; + } else if (len == 6 && strncmp(argv[1], "entire", 6) == 0) { + Dbg(dbg_ctl_init, "initializing plugin for testing TSContScheduleOnEntirePool"); + test_flag = 4; + } else if (len == 10 && strncmp(argv[1], "every_pool", 10) == 0) { + Dbg(dbg_ctl_init, "initializing plugin for testing TSContScheduleEveryOnPool"); + test_flag = 5; + } else if (len == 12 && strncmp(argv[1], "every_thread", 12) == 0) { + Dbg(dbg_ctl_init, "initializing plugin for testing TSContScheduleEveryOnThread"); + test_flag = 6; + } else if (len == 12 && strncmp(argv[1], "every_entire", 12) == 0) { + Dbg(dbg_ctl_init, "initializing plugin for testing TSContScheduleEveryOnEntirePool"); + test_flag = 7; } else { goto Lerror; }