Intel(R) Threading Building Blocks Doxygen Documentation version 4.2.3
tbb::internal::generic_scheduler Class Reference (abstract)

Work stealing task scheduler. More...

#include <scheduler.h>

Inheritance diagram for tbb::internal::generic_scheduler:
Collaboration diagram for tbb::internal::generic_scheduler:

Public Member Functions

bool is_task_pool_published () const
 
bool is_local_task_pool_quiescent () const
 
bool is_quiescent_local_task_pool_empty () const
 
bool is_quiescent_local_task_pool_reset () const
 
void attach_mailbox (affinity_id id)
 
void init_stack_info ()
 Sets up the data necessary for the stealing limiting heuristics. More...
 
bool can_steal ()
 Returns true if stealing is allowed. More...
 
void publish_task_pool ()
 Used by workers to enter the task pool. More...
 
void leave_task_pool ()
 Leave the task pool. More...
 
void reset_task_pool_and_leave ()
 Resets head and tail indices to 0, and leaves task pool. More...
 
task ** lock_task_pool (arena_slot *victim_arena_slot) const
 Locks victim's task pool, and returns pointer to it. The pointer can be NULL. More...
 
void unlock_task_pool (arena_slot *victim_arena_slot, task **victim_task_pool) const
 Unlocks victim's task pool. More...
 
void acquire_task_pool () const
 Locks the local task pool. More...
 
void release_task_pool () const
 Unlocks the local task pool. More...
 
task * prepare_for_spawning (task *t)
 Checks if t is affinitized to another thread, and if so, bundles it as proxy. More...
 
void commit_spawned_tasks (size_t new_tail)
 Makes newly spawned tasks visible to thieves. More...
 
void commit_relocated_tasks (size_t new_tail)
 Makes relocated tasks visible to thieves and releases the local task pool. More...
 
task * get_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Get a task from the local pool. More...
 
task * get_task (size_t T)
 Get a task from the local pool at specified location T. More...
 
task * get_mailbox_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Attempt to get a task from the mailbox. More...
 
task * steal_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Attempts to steal a task from a randomly chosen thread/scheduler. More...
 
task * steal_task_from (__TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation))
 Steal task from another scheduler's ready pool. More...
 
size_t prepare_task_pool (size_t n)
 Makes sure that the task pool can accommodate at least n more elements. More...
 
bool cleanup_master (bool blocking_terminate)
 Perform necessary cleanup when a master thread stops using TBB. More...
 
void assert_task_pool_valid () const
 
void attach_arena (arena *, size_t index, bool is_master)
 
void nested_arena_entry (arena *, size_t)
 
void nested_arena_exit ()
 
void wait_until_empty ()
 
void spawn (task &first, task *&next) __TBB_override
 For internal use only. More...
 
void spawn_root_and_wait (task &first, task *&next) __TBB_override
 For internal use only. More...
 
void enqueue (task &, void *reserved) __TBB_override
 For internal use only. More...
 
void local_spawn (task *first, task *&next)
 
void local_spawn_root_and_wait (task *first, task *&next)
 
virtual void local_wait_for_all (task &parent, task *child)=0
 
void destroy ()
 Destroy and deallocate this scheduler object. More...
 
void cleanup_scheduler ()
 Cleans up this scheduler (the scheduler might be destroyed). More...
 
task & allocate_task (size_t number_of_bytes, __TBB_CONTEXT_ARG(task *parent, task_group_context *context))
 Allocate task object, either from the heap or a free list. More...
 
template<free_task_hint h>
void free_task (task &t)
 Put task on free list. More...
 
void deallocate_task (task &t)
 Return task object to the memory allocator. More...
 
bool is_worker () const
 True if running on a worker thread, false otherwise. More...
 
bool outermost_level () const
 True if the scheduler is on the outermost dispatch level. More...
 
bool master_outermost_level () const
 True if the scheduler is on the outermost dispatch level in a master thread. More...
 
bool worker_outermost_level () const
 True if the scheduler is on the outermost dispatch level in a worker thread. More...
 
unsigned max_threads_in_arena ()
 Returns the concurrency limit of the current arena. More...
 
virtual task * receive_or_steal_task (__TBB_ISOLATION_ARG(__TBB_atomic reference_count &completion_ref_count, isolation_tag isolation))=0
 Try getting a task from other threads (via mailbox, stealing, FIFO queue, orphans adoption). More...
 
void free_nonlocal_small_task (task &t)
 Free a small task t that was allocated by a different scheduler. More...
 
- Public Member Functions inherited from tbb::internal::scheduler
virtual void spawn (task &first, task *&next)=0
 For internal use only. More...
 
virtual void wait_for_all (task &parent, task *child)=0
 For internal use only. More...
 
virtual void spawn_root_and_wait (task &first, task *&next)=0
 For internal use only. More...
 
virtual ~scheduler ()=0
 Pure virtual destructor;. More...
 
virtual void enqueue (task &t, void *reserved)=0
 For internal use only. More...
 

Static Public Member Functions

static bool is_version_3_task (task &t)
 
static bool is_proxy (const task &t)
 True if t is a task_proxy. More...
 
static generic_scheduler * create_master (arena *a)
 Initialize a scheduler for a master thread. More...
 
static generic_scheduler * create_worker (market &m, size_t index, bool genuine)
 Initialize a scheduler for a worker thread. More...
 
static void cleanup_worker (void *arg, bool worker)
 Perform necessary cleanup when a worker thread finishes. More...
 
static task * plugged_return_list ()
 Special value used to mark my_return_list as not taking any more entries. More...
 

Public Attributes

uintptr_t my_stealing_threshold
 Position in the call stack specifying its maximal filling when stealing is still allowed. More...
 
market * my_market
 The market I am in. More...
 
FastRandom my_random
 Random number generator used for picking a random victim from which to steal. More...
 
task * my_free_list
 Free list of small tasks that can be reused. More...
 
task * my_dummy_task
 Fake root task created by slave threads. More...
 
long my_ref_count
 Reference count for scheduler. More...
 
bool my_auto_initialized
 True if *this was created by automatic TBB initialization. More...
 
__TBB_atomic intptr_t my_small_task_count
 Number of small tasks that have been allocated by this scheduler. More...
 
task * my_return_list
 List of small tasks that have been returned to this scheduler by other schedulers. More...
 
- Public Attributes inherited from tbb::internal::intrusive_list_node
intrusive_list_node * my_prev_node
 
intrusive_list_node * my_next_node
 
- Public Attributes inherited from tbb::internal::scheduler_state
size_t my_arena_index
 Index of the arena slot the scheduler occupies now, or occupied last time. More...
 
arena_slot * my_arena_slot
 Pointer to the slot in the arena we own at the moment. More...
 
arena * my_arena
 The arena that I own (if master) or am servicing at the moment (if worker) More...
 
task * my_innermost_running_task
 Innermost task whose task::execute() is running. A dummy task on the outermost level. More...
 
mail_inbox my_inbox
 
affinity_id my_affinity_id
 The mailbox id assigned to this scheduler. More...
 
scheduler_properties my_properties
 

Static Public Attributes

static const size_t quick_task_size = 256-task_prefix_reservation_size
 If sizeof(task) is <=quick_task_size, it is handled on a free list instead of malloc'd. More...
 
static const size_t null_arena_index = ~size_t(0)
 
static const size_t min_task_pool_size = 64
 

Protected Member Functions

 generic_scheduler (market &, bool)
 

Friends

template<typename SchedulerTraits >
class custom_scheduler
 

Detailed Description

Work stealing task scheduler.

None of the fields here are ever read or written by threads other than the thread that creates the instance.

Class generic_scheduler is an abstract base class that contains most of the scheduler, except for tweaks specific to processors and tools (e.g. VTune(TM) Performance Tools). The derived template class custom_scheduler<SchedulerTraits> fills in the tweaks.

Definition at line 137 of file scheduler.h.
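The split described above can be sketched as follows (a minimal illustration with simplified names and signatures, not the actual TBB sources): generic_scheduler supplies the shared machinery and declares the dispatch hooks as pure virtuals, and custom_scheduler<SchedulerTraits> provides them.

class scheduler_core {                          // role of generic_scheduler
public:
    // Shared machinery: task pools, mailboxes, small-task allocation, ...
    virtual void local_wait_for_all( /* task& parent, task* child */ ) = 0;
    virtual void* receive_or_steal_task( /* ref count, isolation */ ) = 0;
    virtual ~scheduler_core() {}
};

template<typename SchedulerTraits>              // role of custom_scheduler<SchedulerTraits>
class traits_scheduler : public scheduler_core {
    void local_wait_for_all( /* ... */ ) override { /* dispatch loop tuned by traits */ }
    void* receive_or_steal_task( /* ... */ ) override { return 0; /* mailbox, steal, FIFO */ }
};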

Constructor & Destructor Documentation

◆ generic_scheduler()

tbb::internal::generic_scheduler::generic_scheduler ( market &  m,
bool  genuine 
)
protected

Definition at line 84 of file scheduler.cpp.

85 : my_market(&m)
86 , my_random(this)
87 , my_ref_count(1)
88#if __TBB_PREVIEW_RESUMABLE_TASKS
89 , my_co_context(m.worker_stack_size(), genuine ? NULL : this)
90#endif
91 , my_small_task_count(1) // Extra 1 is a guard reference
92#if __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT
93 , my_cilk_state(cs_none)
94#endif /* __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT */
95{
96 __TBB_ASSERT( !my_arena_index, "constructor expects the memory being zero-initialized" );
97 __TBB_ASSERT( governor::is_set(NULL), "scheduler is already initialized for this thread" );
98
99 my_innermost_running_task = my_dummy_task = &allocate_task( sizeof(task), __TBB_CONTEXT_ARG(NULL, &the_dummy_context) );
100#if __TBB_PREVIEW_CRITICAL_TASKS
101 my_properties.has_taken_critical_task = false;
102#endif
103#if __TBB_PREVIEW_RESUMABLE_TASKS
104 my_properties.genuine = genuine;
105 my_current_is_recalled = NULL;
106 my_post_resume_action = PRA_NONE;
107 my_post_resume_arg = NULL;
108 my_wait_task = NULL;
109#else
110 suppress_unused_warning(genuine);
111#endif
112 my_properties.outermost = true;
113#if __TBB_TASK_PRIORITY
114 my_ref_top_priority = &m.my_global_top_priority;
115 my_ref_reload_epoch = &m.my_global_reload_epoch;
116#endif /* __TBB_TASK_PRIORITY */
117#if __TBB_TASK_GROUP_CONTEXT
118 // Sync up the local cancellation state with the global one. No need for fence here.
119 my_context_state_propagation_epoch = the_context_state_propagation_epoch;
120 my_context_list_head.my_prev = &my_context_list_head;
121 my_context_list_head.my_next = &my_context_list_head;
122 ITT_SYNC_CREATE(&my_context_list_mutex, SyncType_Scheduler, SyncObj_ContextsList);
123#endif /* __TBB_TASK_GROUP_CONTEXT */
124 ITT_SYNC_CREATE(&my_dummy_task->prefix().ref_count, SyncType_Scheduler, SyncObj_WorkerLifeCycleMgmt);
125 ITT_SYNC_CREATE(&my_return_list, SyncType_Scheduler, SyncObj_TaskReturnList);
126}
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define ITT_SYNC_CREATE(obj, type, name)
Definition: itt_notify.h:115
#define __TBB_CONTEXT_ARG(arg1, context)
void suppress_unused_warning(const T1 &)
Utility template function to prevent "unused" warnings by various compilers.
Definition: tbb_stddef.h:398
__TBB_atomic reference_count ref_count
Reference count used for synchronization.
Definition: task.h:274
internal::task_prefix & prefix(internal::version_tag *=NULL) const
Get reference to corresponding task_prefix.
Definition: task.h:1002
static bool is_set(generic_scheduler *s)
Used to check validity of the local scheduler TLS contents.
Definition: governor.cpp:120
bool outermost
Indicates that a scheduler is on outermost level.
Definition: scheduler.h:57
size_t my_arena_index
Index of the arena slot the scheduler occupies now, or occupied last time.
Definition: scheduler.h:79
task * my_innermost_running_task
Innermost task whose task::execute() is running. A dummy task on the outermost level.
Definition: scheduler.h:88
scheduler_properties my_properties
Definition: scheduler.h:101
__TBB_atomic intptr_t my_small_task_count
Number of small tasks that have been allocated by this scheduler.
Definition: scheduler.h:461
long my_ref_count
Reference count for scheduler.
Definition: scheduler.h:190
task & allocate_task(size_t number_of_bytes, __TBB_CONTEXT_ARG(task *parent, task_group_context *context))
Allocate task object, either from the heap or a free list.
Definition: scheduler.cpp:337
task * my_return_list
List of small tasks that have been returned to this scheduler by other schedulers.
Definition: scheduler.h:465
task * my_dummy_task
Fake root task created by slave threads.
Definition: scheduler.h:186
market * my_market
The market I am in.
Definition: scheduler.h:172
FastRandom my_random
Random number generator used for picking a random victim from which to steal.
Definition: scheduler.h:175

References __TBB_ASSERT, __TBB_CONTEXT_ARG, allocate_task(), tbb::internal::governor::is_set(), ITT_SYNC_CREATE, tbb::internal::scheduler_state::my_arena_index, my_dummy_task, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::scheduler_state::my_properties, my_return_list, tbb::internal::scheduler_properties::outermost, tbb::task::prefix(), tbb::internal::task_prefix::ref_count, and tbb::internal::suppress_unused_warning().

Here is the call graph for this function:

Member Function Documentation

◆ acquire_task_pool()

void tbb::internal::generic_scheduler::acquire_task_pool ( ) const
inline

Locks the local task pool.

Garbles my_arena_slot->task_pool for the duration of the lock. Requires correctly set my_arena_slot->task_pool_ptr.

ATTENTION: This method is mostly the same as generic_scheduler::lock_task_pool(), with a little different logic of slot state checks (slot is either locked or points to our task pool). Thus if either of them is changed, consider changing the counterpart as well.

Definition at line 493 of file scheduler.cpp.

493 {
494 if ( !is_task_pool_published() )
495 return; // we are not in arena - nothing to lock
496 bool sync_prepare_done = false;
497 for( atomic_backoff b;;b.pause() ) {
498#if TBB_USE_ASSERT
499 __TBB_ASSERT( my_arena_slot == my_arena->my_slots + my_arena_index, "invalid arena slot index" );
500 // Local copy of the arena slot task pool pointer is necessary for the next
501 // assertion to work correctly to exclude asynchronous state transition effect.
502 task** tp = my_arena_slot->task_pool;
503 __TBB_ASSERT( tp == LockedTaskPool || tp == my_arena_slot->task_pool_ptr, "slot ownership corrupt?" );
504#endif
505 if( my_arena_slot->task_pool != LockedTaskPool &&
506 as_atomic( my_arena_slot->task_pool ).compare_and_swap( LockedTaskPool, my_arena_slot->task_pool_ptr ) == my_arena_slot->task_pool_ptr )
507 {
508 // We acquired our own slot
509 ITT_NOTIFY(sync_acquired, my_arena_slot);
510 break;
511 }
512 else if( !sync_prepare_done ) {
513 // Start waiting
514 ITT_NOTIFY(sync_prepare, my_arena_slot);
515 sync_prepare_done = true;
516 }
517 // Someone else acquired a lock, so pause and do exponential backoff.
518 }
519 __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "not really acquired task pool" );
520} // generic_scheduler::acquire_task_pool
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:112
#define LockedTaskPool
Definition: scheduler.h:47
atomic< T > & as_atomic(T &t)
Definition: atomic.h:572
arena_slot my_slots[1]
Definition: arena.h:390
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:85
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:82
task **__TBB_atomic task_pool_ptr
Task pool of the scheduler that owns this slot.

References __TBB_ASSERT, tbb::internal::as_atomic(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, tbb::internal::atomic_backoff::pause(), tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by cleanup_master(), get_task(), and prepare_task_pool().

Here is the call graph for this function:
Here is the caller graph for this function:
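The protocol used here and in lock_task_pool() can be illustrated with a small self-contained sketch of the sentinel-pointer lock (an assumed simplification using std::atomic; member names mirror this page, but this is not the TBB implementation): the slot's task_pool field holds either the owner's deque pointer or the reserved LockedTaskPool value while somebody holds the lock, and release_task_pool()/unlock_task_pool() simply store the deque pointer back.

#include <atomic>
#include <cstdint>

struct slot_sketch {
    std::atomic<void*> task_pool;       // analogue of arena_slot::task_pool
    void*              task_pool_ptr;   // analogue of arena_slot::task_pool_ptr
};

static void* const LockedPool = reinterpret_cast<void*>(~std::uintptr_t(0));

void lock_own_slot( slot_sketch& s ) {
    for (;;) {                                    // exponential backoff omitted
        void* expected = s.task_pool_ptr;         // unlocked state: our own pool
        if ( s.task_pool.compare_exchange_weak( expected, LockedPool,
                                                std::memory_order_acquire ) )
            return;                               // lock acquired
        // otherwise a thief holds it (task_pool == LockedPool); spin and retry
    }
}

void unlock_own_slot( slot_sketch& s ) {
    s.task_pool.store( s.task_pool_ptr, std::memory_order_release );   // unlock
}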

◆ allocate_task()

task & tbb::internal::generic_scheduler::allocate_task ( size_t  number_of_bytes,
__TBB_CONTEXT_ARG(task *parent, task_group_context *context)   
)

Allocate task object, either from the heap or a free list.

Returns uninitialized task object with initialized prefix.

Definition at line 337 of file scheduler.cpp.

338 {
339 GATHER_STATISTIC(++my_counters.active_tasks);
340 task *t;
341 if( number_of_bytes<=quick_task_size ) {
342#if __TBB_HOARD_NONLOCAL_TASKS
343 if( (t = my_nonlocal_free_list) ) {
344 GATHER_STATISTIC(--my_counters.free_list_length);
345 __TBB_ASSERT( t->state()==task::freed, "free list of tasks is corrupted" );
346 my_nonlocal_free_list = t->prefix().next;
347 } else
348#endif
349 if( (t = my_free_list) ) {
350 GATHER_STATISTIC(--my_counters.free_list_length);
351 __TBB_ASSERT( t->state()==task::freed, "free list of tasks is corrupted" );
352 my_free_list = t->prefix().next;
353 } else if( my_return_list ) {
354 // No fence required for read of my_return_list above, because __TBB_FetchAndStoreW has a fence.
355 t = (task*)__TBB_FetchAndStoreW( &my_return_list, 0 ); // with acquire
356 __TBB_ASSERT( t, "another thread emptied the my_return_list" );
357 __TBB_ASSERT( t->prefix().origin==this, "task returned to wrong my_return_list" );
358 ITT_NOTIFY( sync_acquired, &my_return_list );
359 my_free_list = t->prefix().next;
360 } else {
361 t = (task*)((char*)NFS_Allocate( 1, task_prefix_reservation_size+quick_task_size, NULL ) + task_prefix_reservation_size );
362#if __TBB_COUNT_TASK_NODES
363 ++my_task_node_count;
364#endif /* __TBB_COUNT_TASK_NODES */
365 t->prefix().origin = this;
366 t->prefix().next = 0;
367 ++my_small_task_count;
368 }
369#if __TBB_PREFETCHING
370 task *t_next = t->prefix().next;
371 if( !t_next ) { // the task was last in the list
372#if __TBB_HOARD_NONLOCAL_TASKS
373 if( my_free_list )
374 t_next = my_free_list;
375 else
376#endif
377 if( my_return_list ) // enable prefetching, gives speedup
378 t_next = my_free_list = (task*)__TBB_FetchAndStoreW( &my_return_list, 0 );
379 }
380 if( t_next ) { // gives speedup for both cache lines
381 __TBB_cl_prefetch(t_next);
382 __TBB_cl_prefetch(&t_next->prefix());
383 }
384#endif /* __TBB_PREFETCHING */
385 } else {
386 GATHER_STATISTIC(++my_counters.big_tasks);
387 t = (task*)((char*)NFS_Allocate( 1, task_prefix_reservation_size+number_of_bytes, NULL ) + task_prefix_reservation_size );
388#if __TBB_COUNT_TASK_NODES
389 ++my_task_node_count;
390#endif /* __TBB_COUNT_TASK_NODES */
391 t->prefix().origin = NULL;
392 }
393 task_prefix& p = t->prefix();
394#if __TBB_TASK_GROUP_CONTEXT
395 p.context = context;
396#endif /* __TBB_TASK_GROUP_CONTEXT */
397 // Obsolete. But still in use, so has to be assigned correct value here.
398 p.owner = this;
399 p.ref_count = 0;
400 // Obsolete. Assign some not outrageously out-of-place value for a while.
401 p.depth = 0;
402 p.parent = parent;
403 // In TBB 2.1 and later, the constructor for task sets extra_state to indicate the version of the tbb/task.h header.
404 // In TBB 2.0 and earlier, the constructor leaves extra_state as zero.
405 p.extra_state = 0;
406 p.affinity = 0;
407 p.state = task::allocated;
408 __TBB_ISOLATION_EXPR( p.isolation = no_isolation );
409 return *t;
410}
#define __TBB_cl_prefetch(p)
Definition: mic_common.h:33
#define __TBB_ISOLATION_EXPR(isolation)
#define GATHER_STATISTIC(x)
void *__TBB_EXPORTED_FUNC NFS_Allocate(size_t n_element, size_t element_size, void *hint)
Allocate memory on cache/sector line boundary.
const size_t task_prefix_reservation_size
Number of bytes reserved for a task prefix.
const isolation_tag no_isolation
Definition: task.h:144
tbb::task * next
"next" field for list of task
Definition: task.h:297
scheduler * origin
The scheduler that allocated the task, or NULL if the task is big.
Definition: task.h:239
@ allocated
task object is freshly allocated or recycled.
Definition: task.h:643
@ freed
task object is on free list, or is going to be put there, or was just taken off.
Definition: task.h:645
task * my_free_list
Free list of small tasks that can be reused.
Definition: scheduler.h:178
static const size_t quick_task_size
If sizeof(task) is <=quick_task_size, it is handled on a free list instead of malloc'd.
Definition: scheduler.h:144

References __TBB_ASSERT, __TBB_cl_prefetch, __TBB_ISOLATION_EXPR, tbb::task::allocated, tbb::task::freed, GATHER_STATISTIC, ITT_NOTIFY, my_free_list, my_return_list, my_small_task_count, tbb::internal::task_prefix::next, tbb::internal::NFS_Allocate(), tbb::internal::no_isolation, tbb::internal::task_prefix::origin, p, parent, tbb::task::prefix(), quick_task_size, tbb::task::state(), and tbb::internal::task_prefix_reservation_size.

Referenced by tbb::internal::allocate_root_proxy::allocate(), generic_scheduler(), and prepare_for_spawning().

Here is the call graph for this function:
Here is the caller graph for this function:
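As the "Referenced by" list shows, the public task allocators funnel into allocate_task(). A typical user-side allocation that ends up on this path (standard TBB task API; the task class is invented for illustration):

#include <tbb/task.h>

class hello_task : public tbb::task {
    tbb::task* execute() {
        // ... user work ...
        return NULL;                 // no bypass task
    }
};

void run_hello() {
    // allocate_root() -> allocate_root_proxy::allocate() -> allocate_task()
    tbb::task& t = *new( tbb::task::allocate_root() ) hello_task;
    tbb::task::spawn_root_and_wait( t );   // waits for completion, then frees the task
}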

◆ assert_task_pool_valid()

void tbb::internal::generic_scheduler::assert_task_pool_valid ( ) const
inline

Definition at line 398 of file scheduler.h.

398{}

Referenced by local_spawn(), prepare_task_pool(), and tbb::task::self().

Here is the caller graph for this function:

◆ attach_arena()

void tbb::internal::generic_scheduler::attach_arena ( arena *  a,
size_t  index,
bool  is_master 
)

Definition at line 80 of file arena.cpp.

80 {
81 __TBB_ASSERT( a->my_market == my_market, NULL );
82 my_arena = a;
83 my_arena_index = index;
84 my_arena_slot = a->my_slots + index;
85 attach_mailbox( affinity_id(index+1) );
86 if ( is_master && my_inbox.is_idle_state( true ) ) {
87 // Master enters an arena with its own task to be executed. It means that master is not
88 // going to enter stealing loop and take affinity tasks.
89 my_inbox.set_is_idle( false );
90 }
91#if __TBB_TASK_GROUP_CONTEXT
92 // Context to be used by root tasks by default (if the user has not specified one).
93 if( !is_master )
94 my_dummy_task->prefix().context = a->my_default_ctx;
95#endif /* __TBB_TASK_GROUP_CONTEXT */
96#if __TBB_TASK_PRIORITY
97 // In the current implementation master threads continue processing even when
98 // there are other masters with higher priority. Only TBB worker threads are
99 // redistributed between arenas based on the latters' priority. Thus master
100 // threads use arena's top priority as a reference point (in contrast to workers
101 // that use my_market->my_global_top_priority).
102 if( is_master ) {
103 my_ref_top_priority = &a->my_top_priority;
104 my_ref_reload_epoch = &a->my_reload_epoch;
105 }
106 my_local_reload_epoch = *my_ref_reload_epoch;
107 __TBB_ASSERT( !my_offloaded_tasks, NULL );
108#endif /* __TBB_TASK_PRIORITY */
109}
unsigned short affinity_id
An id as used for specifying affinity.
Definition: task.h:139
task_group_context * context
Shared context that is used to communicate asynchronous state changes.
Definition: task.h:230
void set_is_idle(bool value)
Indicate whether thread that reads this mailbox is idle.
Definition: mailbox.h:222
bool is_idle_state(bool value) const
Indicate whether thread that reads this mailbox is idle.
Definition: mailbox.h:229
void attach_mailbox(affinity_id id)
Definition: scheduler.h:667

References __TBB_ASSERT, attach_mailbox(), tbb::internal::task_prefix::context, tbb::internal::mail_inbox::is_idle_state(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, tbb::internal::scheduler_state::my_inbox, tbb::internal::arena_base::my_market, my_market, tbb::internal::arena::my_slots, tbb::task::prefix(), and tbb::internal::mail_inbox::set_is_idle().

Referenced by nested_arena_entry().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ attach_mailbox()

void tbb::internal::generic_scheduler::attach_mailbox ( affinity_id  id)
inline

Definition at line 667 of file scheduler.h.

667 {
668 __TBB_ASSERT(id>0,NULL);
669 my_inbox.attach( my_arena->mailbox(id) );
670 my_affinity_id = id;
671}
mail_outbox & mailbox(affinity_id id)
Get reference to mailbox corresponding to given affinity_id.
Definition: arena.h:305
void attach(mail_outbox &putter)
Attach inbox to a corresponding outbox.
Definition: mailbox.h:204
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:99

References __TBB_ASSERT, tbb::internal::mail_inbox::attach(), id, tbb::internal::arena::mailbox(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, and tbb::internal::scheduler_state::my_inbox.

Referenced by attach_arena().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ can_steal()

bool tbb::internal::generic_scheduler::can_steal ( )
inline

Returns true if stealing is allowed.

Definition at line 270 of file scheduler.h.

270 {
271 int anchor;
272 // TODO IDEA: Add performance warning?
273#if __TBB_ipf
274 return my_stealing_threshold < (uintptr_t)&anchor && (uintptr_t)__TBB_get_bsp() < my_rsb_stealing_threshold;
275#else
276 return my_stealing_threshold < (uintptr_t)&anchor;
277#endif
278 }
void * __TBB_get_bsp()
Retrieves the current RSE backing store pointer. IA64 specific.
uintptr_t my_stealing_threshold
Position in the call stack specifying its maximal filling when stealing is still allowed.
Definition: scheduler.h:155

References __TBB_get_bsp(), and my_stealing_threshold.

Here is the call graph for this function:
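The check is a plain comparison of the current stack position, approximated by the address of a local variable, against the threshold computed in init_stack_info(). A self-contained illustration of the same idea (not TBB code; halving the stack size is an assumption that mirrors the listing in init_stack_info() below):

#include <cstddef>
#include <cstdint>

struct stack_guard {
    std::uintptr_t stealing_threshold;   // analogue of my_stealing_threshold

    explicit stack_guard( std::size_t stack_size ) {
        int anchor;                      // lives near this thread's stack base
        stealing_threshold =
            reinterpret_cast<std::uintptr_t>(&anchor) - stack_size / 2;
    }

    bool can_steal() const {
        int anchor;                      // current stack position (stacks grow down)
        return stealing_threshold < reinterpret_cast<std::uintptr_t>(&anchor);
    }
};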

◆ cleanup_master()

bool tbb::internal::generic_scheduler::cleanup_master ( bool  blocking_terminate)

Perform necessary cleanup when a master thread stops using TBB.

Definition at line 1341 of file scheduler.cpp.

1341 {
1342 arena* const a = my_arena;
1343 market * const m = my_market;
1344 __TBB_ASSERT( my_market, NULL );
1345 if( a && is_task_pool_published() ) {
1349 {
1350 // Local task pool is empty
1352 }
1353 else {
1354 // Master's local task pool may e.g. contain proxies of affinitized tasks.
1356 __TBB_ASSERT ( governor::is_set(this), "TLS slot is cleared before the task pool cleanup" );
1357 // Set refcount to make the following dispatch loop infinite (it is interrupted by the cleanup logic).
1361 __TBB_ASSERT ( governor::is_set(this), "Other thread reused our TLS key during the task pool cleanup" );
1362 }
1363 }
1364#if __TBB_ARENA_OBSERVER
1365 if( a )
1366 a->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
1367#endif
1368#if __TBB_SCHEDULER_OBSERVER
1369 the_global_observer_list.notify_exit_observers( my_last_global_observer, /*worker=*/false );
1370#endif /* __TBB_SCHEDULER_OBSERVER */
1371#if _WIN32||_WIN64
1372 m->unregister_master( master_exec_resource );
1373#endif /* _WIN32||_WIN64 */
1374 if( a ) {
1375 __TBB_ASSERT(a->my_slots+0 == my_arena_slot, NULL);
1376#if __TBB_STATISTICS
1377 *my_arena_slot->my_counters += my_counters;
1378#endif /* __TBB_STATISTICS */
1380 }
1381#if __TBB_TASK_GROUP_CONTEXT
1382 else { // task_group_context ownership was not transferred to arena
1383 default_context()->~task_group_context();
1384 NFS_Free(default_context());
1385 }
1386 context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
1387 my_market->my_masters.remove( *this );
1388 lock.release();
1389#endif /* __TBB_TASK_GROUP_CONTEXT */
1390 my_arena_slot = NULL; // detached from slot
1391 cleanup_scheduler(); // do not use scheduler state after this point
1392
1393 if( a )
1394 a->on_thread_leaving<arena::ref_external>();
1395 // If there was an associated arena, it added a public market reference
1396 return m->release( /*is_public*/ a != NULL, blocking_terminate );
1397}
#define EmptyTaskPool
Definition: scheduler.h:46
void __TBB_EXPORTED_FUNC NFS_Free(void *)
Free memory allocated by NFS_Allocate.
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:735
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:713
void set_ref_count(int count)
Set reference count.
Definition: task.h:761
static const unsigned ref_external
Reference increment values for externals and workers.
Definition: arena.h:327
virtual void local_wait_for_all(task &parent, task *child)=0
void release_task_pool() const
Unlocks the local task pool.
Definition: scheduler.cpp:522
generic_scheduler(market &, bool)
Definition: scheduler.cpp:84
void cleanup_scheduler()
Cleans up this scheduler (the scheduler might be destroyed).
Definition: scheduler.cpp:294
void leave_task_pool()
Leave the task pool.
Definition: scheduler.cpp:1260
void acquire_task_pool() const
Locks the local task pool.
Definition: scheduler.cpp:493
__TBB_atomic size_t head
Index of the first ready task in the deque.
generic_scheduler * my_scheduler
Scheduler of the thread attached to the slot.
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_with_release(), acquire_task_pool(), cleanup_scheduler(), EmptyTaskPool, tbb::internal::arena_slot_line1::head, tbb::internal::governor::is_set(), is_task_pool_published(), leave_task_pool(), local_wait_for_all(), lock, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, my_market, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, tbb::internal::NFS_Free(), tbb::internal::arena::on_thread_leaving(), tbb::internal::arena::ref_external, tbb::internal::market::release(), release_task_pool(), tbb::task::set_ref_count(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line1::task_pool.

Here is the call graph for this function:

◆ cleanup_scheduler()

void tbb::internal::generic_scheduler::cleanup_scheduler ( )

Cleans up this scheduler (the scheduler might be destroyed).

Definition at line 294 of file scheduler.cpp.

294 {
295 __TBB_ASSERT( !my_arena_slot, NULL );
296#if __TBB_TASK_PRIORITY
297 __TBB_ASSERT( my_offloaded_tasks == NULL, NULL );
298#endif
299#if __TBB_PREVIEW_CRITICAL_TASKS
300 __TBB_ASSERT( !my_properties.has_taken_critical_task, "Critical tasks miscount." );
301#endif
302#if __TBB_TASK_GROUP_CONTEXT
303 cleanup_local_context_list();
304#endif /* __TBB_TASK_GROUP_CONTEXT */
305 free_task<small_local_task>( *my_dummy_task );
306
307#if __TBB_HOARD_NONLOCAL_TASKS
308 while( task* t = my_nonlocal_free_list ) {
309 task_prefix& p = t->prefix();
310 my_nonlocal_free_list = p.next;
311 __TBB_ASSERT( p.origin && p.origin!=this, NULL );
312 free_nonlocal_small_task(*t);
313 }
314#endif
315 // k accounts for a guard reference and each task that we deallocate.
316 intptr_t k = 1;
317 for(;;) {
318 while( task* t = my_free_list ) {
319 my_free_list = t->prefix().next;
320 deallocate_task(*t);
321 ++k;
322 }
323 if( my_return_list==plugged_return_list() )
324 break;
325 my_free_list = (task*)__TBB_FetchAndStoreW( &my_return_list, (intptr_t)plugged_return_list() );
326 }
327#if __TBB_COUNT_TASK_NODES
328 my_market->update_task_node_count( my_task_node_count );
329#endif /* __TBB_COUNT_TASK_NODES */
330 // Update my_small_task_count last. Doing so sooner might cause another thread to free *this.
331 __TBB_ASSERT( my_small_task_count>=k, "my_small_task_count corrupted" );
332 governor::sign_off(this);
333 if( __TBB_FetchAndAddW( &my_small_task_count, -k )==k )
334 destroy();
335}
static void sign_off(generic_scheduler *s)
Unregister TBB scheduler instance from thread-local storage.
Definition: governor.cpp:145
static task * plugged_return_list()
Special value used to mark my_return_list as not taking any more entries.
Definition: scheduler.h:458
void free_nonlocal_small_task(task &t)
Free a small task t that that was allocated by a different scheduler.
Definition: scheduler.cpp:412
void destroy()
Destroy and deallocate this scheduler object.
Definition: scheduler.cpp:285
void deallocate_task(task &t)
Return task object to the memory allocator.
Definition: scheduler.h:683

References __TBB_ASSERT, deallocate_task(), destroy(), free_nonlocal_small_task(), tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, my_free_list, my_market, tbb::internal::scheduler_state::my_properties, my_return_list, my_small_task_count, tbb::internal::task_prefix::next, p, plugged_return_list(), tbb::task::prefix(), and tbb::internal::governor::sign_off().

Referenced by cleanup_master().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ cleanup_worker()

void tbb::internal::generic_scheduler::cleanup_worker ( void *  arg,
bool  worker 
)
static

Perform necessary cleanup when a worker thread finishes.

Definition at line 1331 of file scheduler.cpp.

1331 {
1333 __TBB_ASSERT( !s.my_arena_slot, "cleaning up attached worker" );
1334#if __TBB_SCHEDULER_OBSERVER
1335 if ( worker ) // can be called by master for worker, do not notify master twice
1336 the_global_observer_list.notify_exit_observers( s.my_last_global_observer, /*worker=*/true );
1337#endif /* __TBB_SCHEDULER_OBSERVER */
1338 s.cleanup_scheduler();
1339}

References __TBB_ASSERT, and s.

Referenced by tbb::internal::market::cleanup().

Here is the caller graph for this function:

◆ commit_relocated_tasks()

void tbb::internal::generic_scheduler::commit_relocated_tasks ( size_t  new_tail)
inline

Makes relocated tasks visible to thieves and releases the local task pool.

Obviously, the task pool must be locked when calling this method.

Definition at line 719 of file scheduler.h.

719 {
721 "Task pool must be locked when calling commit_relocated_tasks()" );
723 // Tail is updated last to minimize probability of a thread making arena
724 // snapshot being misguided into thinking that this task pool is empty.
727}
#define __TBB_store_release
Definition: tbb_machine.h:857
void __TBB_store_relaxed(volatile T &location, V value)
Definition: tbb_machine.h:739
bool is_local_task_pool_quiescent() const
Definition: scheduler.h:633

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), __TBB_store_release, tbb::internal::arena_slot_line1::head, is_local_task_pool_quiescent(), tbb::internal::scheduler_state::my_arena_slot, release_task_pool(), and tbb::internal::arena_slot_line2::tail.

Referenced by prepare_task_pool().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ commit_spawned_tasks()

void tbb::internal::generic_scheduler::commit_spawned_tasks ( size_t  new_tail)
inline

Makes newly spawned tasks visible to thieves.

Definition at line 710 of file scheduler.h.

710 {
711 __TBB_ASSERT ( new_tail <= my_arena_slot->my_task_pool_size, "task deque end was overwritten" );
712 // emit "task was released" signal
713 ITT_NOTIFY(sync_releasing, (void*)((uintptr_t)my_arena_slot+sizeof(uintptr_t)));
714 // Release fence is necessary to make sure that previously stored task pointers
715 // are visible to thieves.
716 __TBB_store_with_release( my_arena_slot->tail, new_tail );
717}

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, tbb::internal::scheduler_state::my_arena_slot, sync_releasing, and tbb::internal::arena_slot_line2::tail.

Referenced by local_spawn().

Here is the call graph for this function:
Here is the caller graph for this function:
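The "release fence" comment above describes a standard publication pattern: fill the deque slots with plain stores, then store the new tail with release semantics, so a thief that reads tail with acquire semantics also sees the task pointers. A minimal stand-alone illustration using std::atomic (the real code uses __TBB_store_with_release and the arena_slot layout, so treat this purely as the memory-ordering idea):

#include <atomic>
#include <cstddef>

struct deque_sketch {
    void*                    slot[64];   // analogue of the task pool array
    std::atomic<std::size_t> tail;       // analogue of arena_slot::tail

    // Producer (pool owner): analogue of commit_spawned_tasks(new_tail).
    void publish_one( void* task_ptr ) {
        std::size_t t = tail.load( std::memory_order_relaxed );
        slot[t] = task_ptr;                               // plain store first
        tail.store( t + 1, std::memory_order_release );   // make it visible last
    }

    // Consumer side of the guarantee; a real thief also takes the pool lock
    // and works from the head end.
    void* peek_last() {
        std::size_t t = tail.load( std::memory_order_acquire );
        return t ? slot[t - 1] : nullptr;   // if t is seen, slot[t-1] is too
    }
};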

◆ create_master()

generic_scheduler * tbb::internal::generic_scheduler::create_master ( arena *  a)
static

Initialize a scheduler for a master thread.

Definition at line 1287 of file scheduler.cpp.

1287 {
1288 // add an internal market reference; the public reference is possibly added in create_arena
1289 generic_scheduler* s = allocate_scheduler( market::global_market(/*is_public=*/false), /* genuine = */ true );
1290 __TBB_ASSERT( !s->my_arena, NULL );
1291 __TBB_ASSERT( s->my_market, NULL );
1292 task& t = *s->my_dummy_task;
1293 s->my_properties.type = scheduler_properties::master;
1294 t.prefix().ref_count = 1;
1295#if __TBB_TASK_GROUP_CONTEXT
1296 t.prefix().context = new ( NFS_Allocate(1, sizeof(task_group_context), NULL) )
1297 task_group_context( task_group_context::isolated, task_group_context::default_traits );
1298#if __TBB_FP_CONTEXT
1299 s->default_context()->capture_fp_settings();
1300#endif
1301 // Do not call init_stack_info before the scheduler is set as master or worker.
1302 s->init_stack_info();
1303 context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
1304 s->my_market->my_masters.push_front( *s );
1305 lock.release();
1306#endif /* __TBB_TASK_GROUP_CONTEXT */
1307 if( a ) {
1308 // Master thread always occupies the first slot
1309 s->attach_arena( a, /*index*/0, /*is_master*/true );
1310 s->my_arena_slot->my_scheduler = s;
1311#if __TBB_TASK_GROUP_CONTEXT
1312 a->my_default_ctx = s->default_context(); // also transfers implied ownership
1313#endif
1314 }
1315 __TBB_ASSERT( s->my_arena_index == 0, "Master thread must occupy the first slot in its arena" );
1316 governor::sign_on(s);
1317
1318#if _WIN32||_WIN64
1319 s->my_market->register_master( s->master_exec_resource );
1320#endif /* _WIN32||_WIN64 */
1321 // Process any existing observers.
1322#if __TBB_ARENA_OBSERVER
1323 __TBB_ASSERT( !a || a->my_observers.empty(), "Just created arena cannot have any observers associated with it" );
1324#endif
1325#if __TBB_SCHEDULER_OBSERVER
1326 the_global_observer_list.notify_entry_observers( s->my_last_global_observer, /*worker=*/false );
1327#endif /* __TBB_SCHEDULER_OBSERVER */
1328 return s;
1329}
generic_scheduler * allocate_scheduler(market &m, bool genuine)
Definition: scheduler.cpp:37
static void sign_on(generic_scheduler *s)
Register TBB scheduler instance in thread-local storage.
Definition: governor.cpp:124
static market & global_market(bool is_public, unsigned max_num_workers=0, size_t stack_size=0)
Factory method creating new market object.
Definition: market.cpp:96

References __TBB_ASSERT, tbb::internal::allocate_scheduler(), tbb::internal::task_prefix::context, tbb::task_group_context::default_traits, tbb::internal::market::global_market(), tbb::task_group_context::isolated, lock, tbb::internal::scheduler_properties::master, tbb::internal::NFS_Allocate(), tbb::task::prefix(), tbb::internal::task_prefix::ref_count, s, and tbb::internal::governor::sign_on().

Referenced by tbb::internal::governor::init_scheduler(), and tbb::internal::governor::init_scheduler_weak().

Here is the call graph for this function:
Here is the caller graph for this function:
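create_master() is reached through governor::init_scheduler() and init_scheduler_weak() (see "Referenced by" above), i.e. when a user thread initializes TBB explicitly or lazily on first use. A typical explicit trigger, shown only to place this internal function in context:

#include <tbb/task_scheduler_init.h>

int main() {
    // Constructing task_scheduler_init on a thread without a scheduler
    // ends up in governor::init_scheduler() -> create_master().
    tbb::task_scheduler_init init;   // default number of threads
    // ... parallel work issued from this (master) thread ...
    return 0;                        // destructor releases the master scheduler
}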

◆ create_worker()

generic_scheduler * tbb::internal::generic_scheduler::create_worker ( market &  m,
size_t  index,
bool  genuine 
)
static

Initialize a scheduler for a worker thread.

Definition at line 1273 of file scheduler.cpp.

1273 {
1274 generic_scheduler* s = allocate_scheduler( m, genuine );
1275 __TBB_ASSERT(!genuine || index, "workers should have index > 0");
1276 s->my_arena_index = index; // index is not a real slot in arena yet
1277 s->my_dummy_task->prefix().ref_count = 2;
1278 s->my_properties.type = scheduler_properties::worker;
1279 // Do not call init_stack_info before the scheduler is set as master or worker.
1280 if (genuine)
1281 s->init_stack_info();
1282 governor::sign_on(s);
1283 return s;
1284}

References __TBB_ASSERT, tbb::internal::allocate_scheduler(), s, tbb::internal::governor::sign_on(), and tbb::internal::scheduler_properties::worker.

Referenced by tbb::internal::market::create_one_job().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ deallocate_task()

void tbb::internal::generic_scheduler::deallocate_task ( task &  t)
inline

Return task object to the memory allocator.

Definition at line 683 of file scheduler.h.

683 {
684#if TBB_USE_ASSERT
685 task_prefix& p = t.prefix();
686 p.state = 0xFF;
687 p.extra_state = 0xFF;
688 poison_pointer(p.next);
689#endif /* TBB_USE_ASSERT */
690 NFS_Free((char*)&t-task_prefix_reservation_size);
691#if __TBB_COUNT_TASK_NODES
692 --my_task_node_count;
693#endif /* __TBB_COUNT_TASK_NODES */
694}
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305

References tbb::internal::NFS_Free(), p, tbb::internal::poison_pointer(), tbb::task::prefix(), and tbb::internal::task_prefix_reservation_size.

Referenced by cleanup_scheduler(), free_nonlocal_small_task(), and free_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ destroy()

void tbb::internal::generic_scheduler::destroy ( )

Destroy and deallocate this scheduler object.

Definition at line 285 of file scheduler.cpp.

285 {
286 __TBB_ASSERT(my_small_task_count == 0, "The scheduler is still in use.");
287 this->~generic_scheduler();
288#if TBB_USE_DEBUG
289 memset((void*)this, -1, sizeof(generic_scheduler));
290#endif
291 NFS_Free(this);
292}

References __TBB_ASSERT, my_small_task_count, and tbb::internal::NFS_Free().

Referenced by cleanup_scheduler().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ enqueue()

void tbb::internal::generic_scheduler::enqueue ( task &  t,
void *  reserved 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 749 of file scheduler.cpp.

749 {
750 generic_scheduler *s = governor::local_scheduler();
751 // these redirections are due to bw-compatibility, consider reworking some day
752 __TBB_ASSERT( s->my_arena, "thread is not in any arena" );
753 s->my_arena->enqueue_task(t, (intptr_t)prio, s->my_random );
754}
static generic_scheduler * local_scheduler()
Obtain the thread-local instance of the TBB scheduler.
Definition: governor.h:129

References __TBB_ASSERT, tbb::internal::governor::local_scheduler(), and s.

Here is the call graph for this function:
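This override backs the public tbb::task::enqueue() entry point: enqueued tasks go to the arena's FIFO queue rather than the spawning deque. A minimal user-side call that reaches it (the task class is invented for illustration):

#include <tbb/task.h>

class fire_and_forget_task : public tbb::task {
    tbb::task* execute() {
        // ... background work ...
        return NULL;
    }
};

void submit_background_work() {
    // task::enqueue() -> generic_scheduler::enqueue() -> arena::enqueue_task()
    tbb::task::enqueue( *new( tbb::task::allocate_root() ) fire_and_forget_task );
}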

◆ free_nonlocal_small_task()

void tbb::internal::generic_scheduler::free_nonlocal_small_task ( task &  t)
 
Free a small task t that was allocated by a different scheduler.

Definition at line 412 of file scheduler.cpp.

412 {
413 __TBB_ASSERT( t.state()==task::freed, NULL );
414 generic_scheduler& s = *static_cast<generic_scheduler*>(t.prefix().origin);
415 __TBB_ASSERT( &s!=this, NULL );
416 for(;;) {
417 task* old = s.my_return_list;
418 if( old==plugged_return_list() )
419 break;
420 // Atomically insert t at head of s.my_return_list
421 t.prefix().next = old;
422 ITT_NOTIFY( sync_releasing, &s.my_return_list );
423 if( as_atomic(s.my_return_list).compare_and_swap(&t, old )==old ) {
424#if __TBB_PREFETCHING
425 __TBB_cl_evict(&t.prefix());
426 __TBB_cl_evict(&t);
427#endif
428 return;
429 }
430 }
431 deallocate_task(t);
432 if( __TBB_FetchAndDecrementWrelease( &s.my_small_task_count )==1 ) {
433 // We freed the last task allocated by scheduler s, so it's our responsibility
434 // to free the scheduler.
435 s.destroy();
436 }
437}
#define __TBB_FetchAndDecrementWrelease(P)
Definition: tbb_machine.h:311
#define __TBB_cl_evict(p)
Definition: mic_common.h:34

References __TBB_ASSERT, __TBB_cl_evict, __TBB_FetchAndDecrementWrelease, tbb::internal::as_atomic(), deallocate_task(), tbb::task::freed, ITT_NOTIFY, tbb::internal::task_prefix::next, tbb::internal::task_prefix::origin, plugged_return_list(), tbb::task::prefix(), s, tbb::task::state(), and sync_releasing.

Referenced by cleanup_scheduler(), and free_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ free_task()

template<free_task_hint hint>
void tbb::internal::generic_scheduler::free_task ( task &  t)

Put task on free list.

Does not call destructor.

Definition at line 730 of file scheduler.h.

730 {
731#if __TBB_HOARD_NONLOCAL_TASKS
732 static const int h = hint&(~local_task);
733#else
734 static const free_task_hint h = hint;
735#endif
736 GATHER_STATISTIC(--my_counters.active_tasks);
737 task_prefix& p = t.prefix();
738 // Verify that optimization hints are correct.
739 __TBB_ASSERT( h!=small_local_task || p.origin==this, NULL );
740 __TBB_ASSERT( !(h&small_task) || p.origin, NULL );
741 __TBB_ASSERT( !(h&local_task) || (!p.origin || uintptr_t(p.origin) > uintptr_t(4096)), "local_task means allocated");
742 poison_value(p.depth);
743 poison_value(p.ref_count);
744 poison_pointer(p.owner);
745#if __TBB_PREVIEW_RESUMABLE_TASKS
746 __TBB_ASSERT(1L << t.state() & (1L << task::executing | 1L << task::allocated | 1 << task::to_resume), NULL);
747#else
748 __TBB_ASSERT(1L << t.state() & (1L << task::executing | 1L << task::allocated), NULL);
749#endif
750 p.state = task::freed;
751 if( h==small_local_task || p.origin==this ) {
752 GATHER_STATISTIC(++my_counters.free_list_length);
753 p.next = my_free_list;
754 my_free_list = &t;
755 } else if( !(h&local_task) && p.origin && uintptr_t(p.origin) < uintptr_t(4096) ) {
756 // a special value reserved for future use, do nothing since
757 // origin is not pointing to a scheduler instance
758 } else if( !(h&local_task) && p.origin ) {
759 GATHER_STATISTIC(++my_counters.free_list_length);
760#if __TBB_HOARD_NONLOCAL_TASKS
761 if( !(h&no_cache) ) {
762 p.next = my_nonlocal_free_list;
763 my_nonlocal_free_list = &t;
764 } else
765#endif
766 free_nonlocal_small_task(t);
767 } else {
768 GATHER_STATISTIC(--my_counters.big_tasks);
769 deallocate_task(t);
770 }
771}
#define poison_value(g)
free_task_hint
Optimization hint to free_task that enables it omit unnecessary tests and code.
@ no_cache
Disable caching for a small task.
@ small_task
Task is known to be a small task.
@ local_task
Task is known to have been allocated by this scheduler.
@ small_local_task
Bitwise-OR of local_task and small_task.
@ executing
task is running, and will be destroyed after method execute() completes.
Definition: task.h:637

References __TBB_ASSERT, tbb::task::allocated, deallocate_task(), tbb::task::executing, free_nonlocal_small_task(), tbb::task::freed, GATHER_STATISTIC, h, tbb::internal::local_task, my_free_list, tbb::internal::no_cache, p, tbb::internal::poison_pointer(), poison_value, tbb::task::prefix(), tbb::internal::small_local_task, tbb::internal::small_task, and tbb::task::state().

Referenced by tbb::interface5::internal::task_base::destroy(), tbb::internal::allocate_root_proxy::free(), tbb::internal::allocate_additional_child_of_proxy::free(), tbb::internal::allocate_continuation_proxy::free(), tbb::internal::allocate_child_proxy::free(), and tbb::internal::auto_empty_task::~auto_empty_task().

Here is the call graph for this function:
Here is the caller graph for this function:
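Stripped of the hint bits, statistics, hoarding and debug poisoning, the routing that free_task() performs can be restated as a short self-contained sketch (an assumed simplification of the listing above, with stand-in types; not a drop-in replacement):

#include <cstdlib>

struct scheduler_sketch;
struct task_sketch {
    scheduler_sketch* origin;            // analogue of task_prefix::origin
    task_sketch*      next;              // analogue of task_prefix::next
};
struct scheduler_sketch {
    task_sketch* free_list;              // analogue of my_free_list
    void return_to_owner( task_sketch& ) { /* CAS onto the owner's my_return_list in TBB */ }
};

void free_task_sketch( scheduler_sketch& self, task_sketch& t ) {
    if ( t.origin == &self ) {           // small task we allocated ourselves
        t.next = self.free_list;         //   -> push on the local free list
        self.free_list = &t;
    } else if ( t.origin ) {             // small task owned by another scheduler
        self.return_to_owner( t );       //   -> free_nonlocal_small_task() in TBB
    } else {                             // big task: memory goes straight back
        std::free( &t );                 //   (NFS_Free() of the whole block in TBB)
    }
}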

◆ get_mailbox_task()

task * tbb::internal::generic_scheduler::get_mailbox_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )

Attempt to get a task from the mailbox.

Gets a task only if it has not been executed by its sender or a thief that has stolen it from the sender's task pool. Otherwise returns NULL.

This method is intended to be used only by the thread extracting the proxy from its mailbox. (In contrast to local task pool, mailbox can be read only by its owner).

Definition at line 1234 of file scheduler.cpp.

1234 {
1235 __TBB_ASSERT( my_affinity_id>0, "not in arena" );
1236 while ( task_proxy* const tp = my_inbox.pop( __TBB_ISOLATION_EXPR( isolation ) ) ) {
1237 if ( task* result = tp->extract_task<task_proxy::mailbox_bit>() ) {
1238 ITT_NOTIFY( sync_acquired, my_inbox.outbox() );
1239 result->prefix().extra_state |= es_task_is_stolen;
1240 return result;
1241 }
1242 // We have exclusive access to the proxy, and can destroy it.
1243 free_task<no_cache_small_task>(*tp);
1244 }
1245 return NULL;
1246}
@ es_task_is_stolen
Set if the task has been stolen.
static const intptr_t mailbox_bit
Definition: mailbox.h:31
task_proxy * pop(__TBB_ISOLATION_EXPR(isolation_tag isolation))
Get next piece of mail, or NULL if mailbox is empty.
Definition: mailbox.h:213

References __TBB_ASSERT, __TBB_ISOLATION_EXPR, tbb::internal::es_task_is_stolen, ITT_NOTIFY, tbb::internal::task_proxy::mailbox_bit, tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_inbox, and tbb::internal::mail_inbox::pop().

Here is the call graph for this function:
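On the sending side of this mechanism, affinity is requested through the public task::set_affinity(), and note_affinity() is the callback the scheduler issues (see get_task() below) when the task ends up running on a different thread. A user-side sketch of that round trip (class and function names invented; the parent's reference count is assumed to already account for the child):

#include <tbb/task.h>

class affinitized_task : public tbb::task {
    tbb::task* execute() {
        // ... work that benefits from cache reuse on the mailed-to thread ...
        return NULL;
    }
    void note_affinity( affinity_id id ) {
        // Called when the task ran somewhere other than requested; a real
        // application might record `id` and re-affinitize the next chunk.
    }
};

void spawn_with_affinity( tbb::task& parent, tbb::task::affinity_id target ) {
    tbb::task& t = *new( parent.allocate_child() ) affinitized_task;
    t.set_affinity( target );   // ask for delivery through the target's mailbox
    tbb::task::spawn( t );
}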

◆ get_task() [1/2]

task * tbb::internal::generic_scheduler::get_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )
inline

Get a task from the local pool.

Called only by the pool owner. Returns the pointer to the task or NULL if a suitable task is not found. Resets the pool if it is empty.

Definition at line 1012 of file scheduler.cpp.

1012 {
1014 // The current task position in the task pool.
1015 size_t T0 = __TBB_load_relaxed( my_arena_slot->tail );
1016 // The bounds of available tasks in the task pool. H0 is only used when the head bound is reached.
1017 size_t H0 = (size_t)-1, T = T0;
1018 task* result = NULL;
1019 bool task_pool_empty = false;
1020 __TBB_ISOLATION_EXPR( bool tasks_omitted = false );
1021 do {
1022 __TBB_ASSERT( !result, NULL );
1024 atomic_fence();
1025 if ( (intptr_t)__TBB_load_relaxed( my_arena_slot->head ) > (intptr_t)T ) {
1028 if ( (intptr_t)H0 > (intptr_t)T ) {
1029 // The thief has not backed off - nothing to grab.
1032 && H0 == T + 1, "victim/thief arbitration algorithm failure" );
1034 // No tasks in the task pool.
1035 task_pool_empty = true;
1036 break;
1037 } else if ( H0 == T ) {
1038 // There is only one task in the task pool.
1040 task_pool_empty = true;
1041 } else {
1042 // Release task pool if there are still some tasks.
1043 // After the release, the tail will be less than T, thus a thief
1044 // will not attempt to get a task at position T.
1046 }
1047 }
1048 __TBB_control_consistency_helper(); // on my_arena_slot->head
1049#if __TBB_TASK_ISOLATION
1050 result = get_task( T, isolation, tasks_omitted );
1051 if ( result ) {
1053 break;
1054 } else if ( !tasks_omitted ) {
1056 __TBB_ASSERT( T0 == T+1, NULL );
1057 T0 = T;
1058 }
1059#else
1060 result = get_task( T );
1061#endif /* __TBB_TASK_ISOLATION */
1062 } while ( !result && !task_pool_empty );
1063
1064#if __TBB_TASK_ISOLATION
1065 if ( tasks_omitted ) {
1066 if ( task_pool_empty ) {
1067 // All tasks have been checked. The task pool should be in reset state.
1068 // We just restore the bounds for the available tasks.
1069 // TODO: Does it have sense to move them to the beginning of the task pool?
1071 if ( result ) {
1072 // If we have a task, it should be at H0 position.
1073 __TBB_ASSERT( H0 == T, NULL );
1074 ++H0;
1075 }
1076 __TBB_ASSERT( H0 <= T0, NULL );
1077 if ( H0 < T0 ) {
1078 // Restore the task pool if there are some tasks.
1081 // The release fence is used in publish_task_pool.
1083 // Synchronize with snapshot as we published some tasks.
1085 }
1086 } else {
1087 // A task has been obtained. We need to make a hole in position T.
1089 __TBB_ASSERT( result, NULL );
1090 my_arena_slot->task_pool_ptr[T] = NULL;
1092 // Synchronize with snapshot as we published some tasks.
1093 // TODO: consider some approach not to call wakeup for each time. E.g. check if the tail reached the head.
1095 }
1096
1097 // Now it is safe to call note_affinity because the task pool is restored.
1098 if ( my_innermost_running_task == result ) {
1099 assert_task_valid( result );
1100 result->note_affinity( my_affinity_id );
1101 }
1102 }
1103#endif /* __TBB_TASK_ISOLATION */
1104 __TBB_ASSERT( (intptr_t)__TBB_load_relaxed( my_arena_slot->tail ) >= 0, NULL );
1105 __TBB_ASSERT( result || __TBB_ISOLATION_EXPR( tasks_omitted || ) is_quiescent_local_task_pool_reset(), NULL );
1106 return result;
1107} // generic_scheduler::get_task
#define __TBB_control_consistency_helper()
Definition: gcc_generic.h:60
void atomic_fence()
Sequentially consistent full memory fence.
Definition: tbb_machine.h:339
void assert_task_valid(const task *)
void advertise_new_work()
If necessary, raise a flag that there is new job in arena.
Definition: arena.h:484
bool is_quiescent_local_task_pool_reset() const
Definition: scheduler.h:644
task * get_task(__TBB_ISOLATION_EXPR(isolation_tag isolation))
Get a task from the local pool.
Definition: scheduler.cpp:1012
void reset_task_pool_and_leave()
Resets head and tail indices to 0, and leaves task pool.
Definition: scheduler.h:702
void publish_task_pool()
Used by workers to enter the task pool.
Definition: scheduler.cpp:1248

References __TBB_ASSERT, __TBB_control_consistency_helper, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), acquire_task_pool(), tbb::internal::arena::advertise_new_work(), tbb::internal::assert_task_valid(), tbb::atomic_fence(), get_task(), tbb::internal::arena_slot_line1::head, is_quiescent_local_task_pool_reset(), is_task_pool_published(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, tbb::task::note_affinity(), tbb::internal::poison_pointer(), publish_task_pool(), release_task_pool(), reset_task_pool_and_leave(), tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line2::task_pool_ptr, and tbb::internal::arena::wakeup.

Referenced by get_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ get_task() [2/2]

task * tbb::internal::generic_scheduler::get_task ( size_t  T)
inline

Get a task from the local pool at specified location T.

Returns the pointer to the task or NULL if the task cannot be executed, e.g. proxy has been deallocated or isolation constraint is not met. tasks_omitted tells if some tasks have been omitted. Called only by the pool owner. The caller should guarantee that the position T is not available for a thief.

Definition at line 961 of file scheduler.cpp.

963{
965 || is_local_task_pool_quiescent(), "Is it safe to get a task at position T?" );
966
967 task* result = my_arena_slot->task_pool_ptr[T];
968 __TBB_ASSERT( !is_poisoned( result ), "The poisoned task is going to be processed" );
969#if __TBB_TASK_ISOLATION
970 if ( !result )
971 return NULL;
972
973 bool omit = isolation != no_isolation && isolation != result->prefix().isolation;
974 if ( !omit && !is_proxy( *result ) )
975 return result;
976 else if ( omit ) {
977 tasks_omitted = true;
978 return NULL;
979 }
980#else
982 if ( !result || !is_proxy( *result ) )
983 return result;
984#endif /* __TBB_TASK_ISOLATION */
985
986 task_proxy& tp = static_cast<task_proxy&>(*result);
987 if ( task *t = tp.extract_task<task_proxy::pool_bit>() ) {
988 GATHER_STATISTIC( ++my_counters.proxies_executed );
989 // Following assertion should be true because TBB 2.0 tasks never specify affinity, and hence are not proxied.
990 __TBB_ASSERT( is_version_3_task( *t ), "backwards compatibility with TBB 2.0 broken" );
991 my_innermost_running_task = t; // prepare for calling note_affinity()
992#if __TBB_TASK_ISOLATION
993 // Task affinity has changed. Postpone calling note_affinity because the task pool is in invalid state.
994 if ( !tasks_omitted )
995#endif /* __TBB_TASK_ISOLATION */
996 {
998 t->note_affinity( my_affinity_id );
999 }
1000 return t;
1001 }
1002
1003 // Proxy was empty, so it's our responsibility to free it
1004 free_task<small_task>( tp );
1005#if __TBB_TASK_ISOLATION
1006 if ( tasks_omitted )
1007 my_arena_slot->task_pool_ptr[T] = NULL;
1008#endif /* __TBB_TASK_ISOLATION */
1009 return NULL;
1010}
static const intptr_t pool_bit
Definition: mailbox.h:30
static bool is_version_3_task(task &t)
Definition: scheduler.h:146
static bool is_proxy(const task &t)
True if t is a task_proxy.
Definition: scheduler.h:348

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::task_proxy::extract_task(), GATHER_STATISTIC, tbb::internal::task_prefix::isolation, tbb::internal::no_isolation, tbb::internal::poison_pointer(), tbb::internal::task_proxy::pool_bit, and tbb::task::prefix().
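
The proxy handling above relies on a tagged pointer that records whether the task is still reachable from the spawner's pool, the destination mailbox, or both. Below is a minimal, self-contained sketch of that dual-location idea; all names (demo_proxy, demo_task, the bit values) are illustrative and do not match the TBB internals in mailbox.h.

#include <atomic>
#include <cstdint>
#include <cstdio>

struct demo_task { int id; };

struct demo_proxy {
    static const intptr_t pool_bit    = 1;  // still reachable from the task pool
    static const intptr_t mailbox_bit = 2;  // still reachable from the mailbox
    std::atomic<intptr_t> task_and_tag;     // task pointer | location bits

    explicit demo_proxy( demo_task* t )
        : task_and_tag( reinterpret_cast<intptr_t>(t) | pool_bit | mailbox_bit ) {}

    // Returns the task if this location wins the race, NULL otherwise.
    demo_task* extract_task( intptr_t my_bit ) {
        intptr_t tat = task_and_tag.load( std::memory_order_acquire );
        while ( tat & my_bit ) {
            if ( task_and_tag.compare_exchange_weak( tat, tat & ~my_bit ) ) {
                // We cleared our bit; only the first extractor (both bits still set) gets the task.
                if ( (tat & (pool_bit | mailbox_bit)) == (pool_bit | mailbox_bit) )
                    return reinterpret_cast<demo_task*>( tat & ~(pool_bit | mailbox_bit) );
                return NULL; // the other location already consumed it
            }
        }
        return NULL;
    }
};

int main() {
    demo_task t = { 42 };
    demo_proxy p( &t );
    demo_task* from_pool    = p.extract_task( demo_proxy::pool_bit );
    demo_task* from_mailbox = p.extract_task( demo_proxy::mailbox_bit );
    std::printf( "pool got %p, mailbox got %p\n", (void*)from_pool, (void*)from_mailbox );
    return 0;
}

Exactly one of the two extractions yields the task; the other sees that its location bit was the last one standing and returns NULL, which is why the empty proxy must then be freed by whoever drained it last.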


◆ init_stack_info()

void tbb::internal::generic_scheduler::init_stack_info ( )

Sets up the data necessary for the stealing limiting heuristics.

Definition at line 158 of file scheduler.cpp.

158 {
159 // Stacks are growing top-down. Highest address is called "stack base",
160 // and the lowest is "stack limit".
161 __TBB_ASSERT( !my_stealing_threshold, "Stealing threshold has already been calculated" );
162 size_t stack_size = my_market->worker_stack_size();
163#if USE_WINTHREAD
164#if defined(_MSC_VER)&&_MSC_VER<1400 && !_WIN64
165 NT_TIB *pteb;
166 __asm mov eax, fs:[0x18]
167 __asm mov pteb, eax
168#else
169 NT_TIB *pteb = (NT_TIB*)NtCurrentTeb();
170#endif
171 __TBB_ASSERT( &pteb < pteb->StackBase && &pteb > pteb->StackLimit, "invalid stack info in TEB" );
172 __TBB_ASSERT( stack_size >0, "stack_size not initialized?" );
173 // When a thread is created with the attribute STACK_SIZE_PARAM_IS_A_RESERVATION, stack limit
174 // in the TIB points to the committed part of the stack only. This renders the expression
175 // "(uintptr_t)pteb->StackBase / 2 + (uintptr_t)pteb->StackLimit / 2" virtually useless.
176 // Thus for worker threads we use the explicit stack size we used while creating them.
177 // And for master threads we rely on the following fact and assumption:
178 // - the default stack size of a master thread on Windows is 1M;
179 // - if it was explicitly set by the application it is at least as large as the size of a worker stack.
180 if ( is_worker() || stack_size < MByte )
181 my_stealing_threshold = (uintptr_t)pteb->StackBase - stack_size / 2;
182 else
183 my_stealing_threshold = (uintptr_t)pteb->StackBase - MByte / 2;
184#else /* USE_PTHREAD */
 185 // There is no portable way to get the stack base address in POSIX, so we use
 186 // a non-portable method (available on all modern Linux) or a simplified approach
 187 // based on common-sense assumptions. The most important assumption
188 // is that the main thread's stack size is not less than that of other threads.
189 // See also comment 3 at the end of this file
190 void *stack_base = &stack_size;
191#if __linux__ && !__bg__
192#if __TBB_ipf
193 void *rsb_base = __TBB_get_bsp();
194#endif
195 size_t np_stack_size = 0;
196 // Points to the lowest addressable byte of a stack.
197 void *stack_limit = NULL;
198
199#if __TBB_PREVIEW_RESUMABLE_TASKS
200 if ( !my_properties.genuine ) {
201 stack_limit = my_co_context.get_stack_limit();
202 __TBB_ASSERT( (uintptr_t)stack_base > (uintptr_t)stack_limit, "stack size must be positive" );
203 // Size of the stack free part
204 stack_size = size_t((char*)stack_base - (char*)stack_limit);
205 }
206#endif
207
208 pthread_attr_t np_attr_stack;
209 if( !stack_limit && 0 == pthread_getattr_np(pthread_self(), &np_attr_stack) ) {
210 if ( 0 == pthread_attr_getstack(&np_attr_stack, &stack_limit, &np_stack_size) ) {
211#if __TBB_ipf
212 pthread_attr_t attr_stack;
213 if ( 0 == pthread_attr_init(&attr_stack) ) {
214 if ( 0 == pthread_attr_getstacksize(&attr_stack, &stack_size) ) {
215 if ( np_stack_size < stack_size ) {
216 // We are in a secondary thread. Use reliable data.
217 // IA-64 architecture stack is split into RSE backup and memory parts
218 rsb_base = stack_limit;
219 stack_size = np_stack_size/2;
220 // Limit of the memory part of the stack
221 stack_limit = (char*)stack_limit + stack_size;
222 }
223 // We are either in the main thread or this thread stack
 224 // is bigger than that of the main one. As we cannot discern
225 // these cases we fall back to the default (heuristic) values.
226 }
227 pthread_attr_destroy(&attr_stack);
228 }
229 // IA-64 architecture stack is split into RSE backup and memory parts
230 my_rsb_stealing_threshold = (uintptr_t)((char*)rsb_base + stack_size/2);
231#endif /* __TBB_ipf */
232 // TODO: pthread_attr_getstack cannot be used with Intel(R) Cilk(TM) Plus
233 // __TBB_ASSERT( (uintptr_t)stack_base > (uintptr_t)stack_limit, "stack size must be positive" );
234 // Size of the stack free part
235 stack_size = size_t((char*)stack_base - (char*)stack_limit);
236 }
237 pthread_attr_destroy(&np_attr_stack);
238 }
239#endif /* __linux__ */
240 __TBB_ASSERT( stack_size>0, "stack size must be positive" );
241 my_stealing_threshold = (uintptr_t)((char*)stack_base - stack_size/2);
242#endif /* USE_PTHREAD */
243}
const size_t MByte
Definition: tbb_misc.h:45
size_t worker_stack_size() const
Returns the requested stack size of worker threads.
Definition: market.h:314
bool is_worker() const
True if running on a worker thread, false otherwise.
Definition: scheduler.h:673

References __TBB_ASSERT, __TBB_get_bsp(), is_worker(), tbb::internal::MByte, my_market, tbb::internal::scheduler_state::my_properties, my_stealing_threshold, and tbb::internal::market::worker_stack_size().
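
The heuristic above boils down to: remember an approximate stack base, place the stealing threshold halfway down the (assumed) stack, and stop stealing once the current stack position drops below it. A minimal standalone sketch of that idea follows; the 1 MB constant, the names, and the probe-address trick are illustrative assumptions, whereas TBB derives the size from market::worker_stack_size() or the TIB/pthread attributes.

#include <cstddef>
#include <cstdint>
#include <cstdio>

static const std::size_t kAssumedStackSize = 1024 * 1024; // assumption: 1 MB

struct stack_guard {
    std::uintptr_t stealing_threshold;

    explicit stack_guard( const void* approx_stack_base )
        : stealing_threshold( reinterpret_cast<std::uintptr_t>(approx_stack_base)
                              - kAssumedStackSize / 2 ) {}

    // Stacks grow downwards: the deeper we are, the smaller the address.
    bool can_steal() const {
        int probe;
        return reinterpret_cast<std::uintptr_t>(&probe) > stealing_threshold;
    }
};

static void recurse( const stack_guard& g, int depth ) {
    char ballast[16 * 1024]; // consume some stack to make the demo visible
    ballast[0] = 0;
    if ( g.can_steal() && depth < 1000 )
        recurse( g, depth + 1 );
    else
        std::printf( "stopped stealing at depth %d%c\n", depth, ballast[0] ? '!' : '.' );
}

int main() {
    int anchor;                  // address of a local approximates the stack base
    stack_guard g( &anchor );
    recurse( g, 0 );
    return 0;
}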


◆ is_local_task_pool_quiescent()

bool tbb::internal::generic_scheduler::is_local_task_pool_quiescent ( ) const
inline

Definition at line 633 of file scheduler.h.

 633 {
 634 __TBB_ASSERT( my_arena_slot, 0 );
 635 task** tp = my_arena_slot->task_pool;
 636 return tp == EmptyTaskPool || tp == LockedTaskPool;
 637}

References __TBB_ASSERT, EmptyTaskPool, LockedTaskPool, tbb::internal::scheduler_state::my_arena_slot, and tbb::internal::arena_slot_line1::task_pool.

Referenced by commit_relocated_tasks(), is_quiescent_local_task_pool_empty(), and is_quiescent_local_task_pool_reset().


◆ is_proxy()

static bool tbb::internal::generic_scheduler::is_proxy ( const task t)
inlinestatic

True if t is a task_proxy.

Definition at line 348 of file scheduler.h.

348 {
349 return t.prefix().extra_state==es_task_proxy;
350 }
@ es_task_proxy
Tag for v3 task_proxy.

References tbb::internal::es_task_proxy, tbb::internal::task_prefix::extra_state, and tbb::task::prefix().

Referenced by steal_task(), and steal_task_from().


◆ is_quiescent_local_task_pool_empty()

bool tbb::internal::generic_scheduler::is_quiescent_local_task_pool_empty ( ) const
inline

Definition at line 639 of file scheduler.h.

 639 {
 640 __TBB_ASSERT( is_local_task_pool_quiescent(), "Task pool is not quiescent" );
 641 return __TBB_load_relaxed( my_arena_slot->head ) == __TBB_load_relaxed( my_arena_slot->tail );
 642}

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::arena_slot_line1::head, is_local_task_pool_quiescent(), tbb::internal::scheduler_state::my_arena_slot, and tbb::internal::arena_slot_line2::tail.

Referenced by leave_task_pool().


◆ is_quiescent_local_task_pool_reset()

bool tbb::internal::generic_scheduler::is_quiescent_local_task_pool_reset ( ) const
inline

Definition at line 644 of file scheduler.h.

 644 {
 645 __TBB_ASSERT( is_local_task_pool_quiescent(), "Task pool is not quiescent" );
 646 return __TBB_load_relaxed( my_arena_slot->head ) == 0 && __TBB_load_relaxed( my_arena_slot->tail ) == 0;
 647}

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::arena_slot_line1::head, is_local_task_pool_quiescent(), tbb::internal::scheduler_state::my_arena_slot, and tbb::internal::arena_slot_line2::tail.

Referenced by get_task(), and prepare_task_pool().


◆ is_task_pool_published()

bool tbb::internal::generic_scheduler::is_task_pool_published ( ) const
inline

Definition at line 628 of file scheduler.h.

 628 {
 629 __TBB_ASSERT( my_arena_slot, 0 );
 630 return my_arena_slot->task_pool != EmptyTaskPool;
 631}

References __TBB_ASSERT, EmptyTaskPool, tbb::internal::scheduler_state::my_arena_slot, and tbb::internal::arena_slot_line1::task_pool.

Referenced by acquire_task_pool(), cleanup_master(), get_task(), leave_task_pool(), local_spawn(), prepare_task_pool(), and release_task_pool().


◆ is_version_3_task()

static bool tbb::internal::generic_scheduler::is_version_3_task ( task t)
inlinestatic

Definition at line 146 of file scheduler.h.

146 {
147#if __TBB_PREVIEW_CRITICAL_TASKS
148 return (t.prefix().extra_state & 0x7)>=0x1;
149#else
150 return (t.prefix().extra_state & 0x0F)>=0x1;
151#endif
152 }

References tbb::internal::task_prefix::extra_state, and tbb::task::prefix().

Referenced by prepare_for_spawning(), and steal_task().


◆ is_worker()

bool tbb::internal::generic_scheduler::is_worker ( ) const
inline

True if running on a worker thread, false otherwise.

Definition at line 673 of file scheduler.h.

 673 {
 674 return my_properties.type == scheduler_properties::worker;
 675}
bool type
Indicates that a scheduler acts as a master or a worker.
Definition: scheduler.h:54

References tbb::internal::scheduler_state::my_properties, tbb::internal::scheduler_properties::type, and tbb::internal::scheduler_properties::worker.

Referenced by tbb::internal::market::cleanup(), init_stack_info(), master_outermost_level(), nested_arena_entry(), nested_arena_exit(), and worker_outermost_level().


◆ leave_task_pool()

void tbb::internal::generic_scheduler::leave_task_pool ( )
inline

Leave the task pool.

Leaving the task pool automatically releases it if it is locked.

Definition at line 1260 of file scheduler.cpp.

1260 {
1261 __TBB_ASSERT( is_task_pool_published(), "Not in arena" );
1262 // Do not reset my_arena_index. It will be used to (attempt to) re-acquire the slot next time
1263 __TBB_ASSERT( &my_arena->my_slots[my_arena_index] == my_arena_slot, "arena slot and slot index mismatch" );
1264 __TBB_ASSERT ( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when leaving arena" );
 1265 __TBB_ASSERT ( is_quiescent_local_task_pool_empty(), "Cannot leave arena when the task pool is not empty" );
 1266 ITT_NOTIFY(sync_releasing, my_arena_slot);
 1267 // No release fence is necessary here as this assignment precludes external
 1268 // accesses to the local task pool when it becomes visible. Thus it is harmless
 1269 // if it gets hoisted above preceding local bookkeeping manipulations.
 1270 __TBB_store_relaxed( my_arena_slot->task_pool, EmptyTaskPool );
 1271}
bool is_quiescent_local_task_pool_empty() const
Definition: scheduler.h:639

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), EmptyTaskPool, is_quiescent_local_task_pool_empty(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, sync_releasing, and tbb::internal::arena_slot_line1::task_pool.

Referenced by cleanup_master(), and reset_task_pool_and_leave().


◆ local_spawn()

void tbb::internal::generic_scheduler::local_spawn ( task first,
task *&  next 
)

Conceptually, this method should be a member of class scheduler. But doing so would force us to publish class scheduler in the headers.

Definition at line 653 of file scheduler.cpp.

653 {
654 __TBB_ASSERT( first, NULL );
655 __TBB_ASSERT( governor::is_set(this), NULL );
656#if __TBB_TODO
657 // We need to consider capping the max task pool size and switching
658 // to in-place task execution whenever it is reached.
659#endif
660 if ( &first->prefix().next == &next ) {
661 // Single task is being spawned
662#if __TBB_TODO
663 // TODO:
664 // In the future we need to add overloaded spawn method for a single task,
665 // and a method accepting an array of task pointers (we may also want to
666 // change the implementation of the task_list class). But since such changes
667 // may affect the binary compatibility, we postpone them for a while.
668#endif
669#if __TBB_PREVIEW_CRITICAL_TASKS
670 if( !handled_as_critical( *first ) )
671#endif
672 {
673 size_t T = prepare_task_pool( 1 );
675 commit_spawned_tasks( T + 1 );
676 if ( !is_task_pool_published() )
678 }
679 }
680 else {
681 // Task list is being spawned
682#if __TBB_TODO
683 // TODO: add task_list::front() and implement&document the local execution ordering which is
684 // opposite to the current implementation. The idea is to remove hackish fast_reverse_vector
685 // and use push_back/push_front when accordingly LIFO and FIFO order of local execution is
686 // desired. It also requires refactoring of the reload_tasks method and my_offloaded_tasks list.
687 // Additional benefit may come from adding counter to the task_list so that it can reserve enough
688 // space in the task pool in advance and move all the tasks directly without any intermediate
689 // storages. But it requires dealing with backward compatibility issues and still supporting
690 // counter-less variant (though not necessarily fast implementation).
691#endif
693 fast_reverse_vector<task*> tasks(arr, min_task_pool_size);
694 task *t_next = NULL;
695 for( task* t = first; ; t = t_next ) {
696 // If t is affinitized to another thread, it may already be executed
697 // and destroyed by the time prepare_for_spawning returns.
698 // So milk it while it is alive.
699 bool end = &t->prefix().next == &next;
700 t_next = t->prefix().next;
701#if __TBB_PREVIEW_CRITICAL_TASKS
702 if( !handled_as_critical( *t ) )
703#endif
704 tasks.push_back( prepare_for_spawning(t) );
705 if( end )
706 break;
707 }
708 if( size_t num_tasks = tasks.size() ) {
709 size_t T = prepare_task_pool( num_tasks );
710 tasks.copy_memory( my_arena_slot->task_pool_ptr + T );
711 commit_spawned_tasks( T + num_tasks );
712 if ( !is_task_pool_published() )
714 }
715 }
718}
auto first(Container &c) -> decltype(begin(c))
size_t prepare_task_pool(size_t n)
Makes sure that the task pool can accommodate at least n more elements.
Definition: scheduler.cpp:439
static const size_t min_task_pool_size
Definition: scheduler.h:369
void commit_spawned_tasks(size_t new_tail)
Makes newly spawned tasks visible to thieves.
Definition: scheduler.h:710
task * prepare_for_spawning(task *t)
Checks if t is affinitized to another thread, and if so, bundles it as proxy.
Definition: scheduler.cpp:595

References __TBB_ASSERT, tbb::internal::arena::advertise_new_work(), assert_task_pool_valid(), commit_spawned_tasks(), tbb::internal::fast_reverse_vector< T, max_segments >::copy_memory(), end, tbb::internal::first(), tbb::internal::governor::is_set(), is_task_pool_published(), min_task_pool_size, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::task_prefix::next, tbb::task::prefix(), prepare_for_spawning(), prepare_task_pool(), publish_task_pool(), tbb::internal::fast_reverse_vector< T, max_segments >::push_back(), tbb::internal::fast_reverse_vector< T, max_segments >::size(), tbb::internal::arena_slot_line2::task_pool_ptr, and tbb::internal::arena::work_spawned.

Referenced by local_spawn_root_and_wait(), spawn(), and tbb::internal::custom_scheduler< SchedulerTraits >::tally_completion_of_predecessor().
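
The fast path above follows a simple publication discipline: make room at the tail, write the new task pointers, and only then advance the tail so that thieves never observe uninitialized slots. A minimal sketch of that discipline is shown below under simplifying assumptions (a fixed-size array, an atomic tail, no pool growth or publication step); the names are illustrative, not TBB's.

#include <atomic>
#include <cstddef>
#include <cstdio>

struct demo_task { int id; };

struct demo_pool {
    demo_task* slots[64];
    std::atomic<std::size_t> tail;
};

static void spawn( demo_pool& pool, demo_task** first, std::size_t count ) {
    std::size_t t = pool.tail.load( std::memory_order_relaxed ); // owner-only read
    for ( std::size_t i = 0; i < count; ++i )
        pool.slots[t + i] = first[i];                            // fill the slots first
    pool.tail.store( t + count, std::memory_order_release );     // then expose them to thieves
}

int main() {
    demo_pool pool;
    pool.tail.store( 0 );
    demo_task a = {1}, b = {2};
    demo_task* batch[2] = { &a, &b };
    spawn( pool, batch, 2 );
    std::printf( "tail = %zu\n", pool.tail.load() );
    return 0;
}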


◆ local_spawn_root_and_wait()

void tbb::internal::generic_scheduler::local_spawn_root_and_wait ( task first,
task *&  next 
)

Definition at line 720 of file scheduler.cpp.

720 {
721 __TBB_ASSERT( governor::is_set(this), NULL );
722 __TBB_ASSERT( first, NULL );
 723 auto_empty_task dummy( __TBB_CONTEXT_ARG(this, first->prefix().context) );
 724 reference_count n = 0;
 725 for( task* t=first; ; t=t->prefix().next ) {
726 ++n;
727 __TBB_ASSERT( !t->prefix().parent, "not a root task, or already running" );
728 t->prefix().parent = &dummy;
729 if( &t->prefix().next==&next ) break;
730#if __TBB_TASK_GROUP_CONTEXT
731 __TBB_ASSERT( t->prefix().context == t->prefix().next->prefix().context,
732 "all the root tasks in list must share the same context");
733#endif /* __TBB_TASK_GROUP_CONTEXT */
734 }
735 dummy.prefix().ref_count = n+1;
736 if( n>1 )
737 local_spawn( first->prefix().next, next );
738 local_wait_for_all( dummy, first );
739}
intptr_t reference_count
A reference count.
Definition: task.h:131
void local_spawn(task *first, task *&next)
Definition: scheduler.cpp:653

References __TBB_ASSERT, __TBB_CONTEXT_ARG, tbb::internal::task_prefix::context, tbb::internal::first(), tbb::internal::governor::is_set(), local_spawn(), local_wait_for_all(), tbb::internal::task_prefix::next, tbb::internal::task_prefix::parent, tbb::internal::auto_empty_task::prefix(), tbb::task::prefix(), and tbb::internal::task_prefix::ref_count.

Referenced by spawn_root_and_wait().
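
The key trick above is the dummy parent whose ref_count is set to n + 1: each completed root task removes one reference, and the extra "+1" belongs to the waiter, so the wait finishes when only that reference remains. The sketch below reproduces the counting pattern with a plain std::atomic standing in for the task ref_count; it is an illustration, not the TBB dispatch loop.

#include <atomic>
#include <thread>
#include <vector>
#include <cstdio>

int main() {
    const int n = 4;                      // number of "root tasks"
    std::atomic<int> ref_count( n + 1 );  // children + the waiter's own reference

    std::vector<std::thread> children;
    for ( int i = 0; i < n; ++i )
        children.push_back( std::thread( [&ref_count, i]{
            std::printf( "child %d done\n", i );
            --ref_count;                  // signal completion
        } ) );

    while ( ref_count.load( std::memory_order_acquire ) > 1 )
        std::this_thread::yield();        // local_wait_for_all would dispatch tasks here
    std::printf( "all children finished\n" );

    for ( std::size_t i = 0; i < children.size(); ++i ) children[i].join();
    return 0;
}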


◆ local_wait_for_all()

virtual void tbb::internal::generic_scheduler::local_wait_for_all ( task parent,
task child 
)
pure virtual

Implemented in tbb::internal::custom_scheduler< SchedulerTraits >.

Referenced by cleanup_master(), local_spawn_root_and_wait(), and wait_until_empty().


◆ lock_task_pool()

task ** tbb::internal::generic_scheduler::lock_task_pool ( arena_slot victim_arena_slot) const
inline

Locks victim's task pool, and returns pointer to it. The pointer can be NULL.

Garbles victim_arena_slot->task_pool for the duration of the lock.

ATTENTION: This method is mostly the same as generic_scheduler::acquire_task_pool(), with slightly different logic for slot state checks (the slot can be empty, locked, or point to any task pool other than ours, and asynchronous transitions between all these states are possible). Thus, if either of them is changed, consider changing the counterpart as well.

Definition at line 537 of file scheduler.cpp.

537 {
538 task** victim_task_pool;
539 bool sync_prepare_done = false;
540 for( atomic_backoff backoff;; /*backoff pause embedded in the loop*/) {
541 victim_task_pool = victim_arena_slot->task_pool;
542 // NOTE: Do not use comparison of head and tail indices to check for
543 // the presence of work in the victim's task pool, as they may give
544 // incorrect indication because of task pool relocations and resizes.
545 if ( victim_task_pool == EmptyTaskPool ) {
546 // The victim thread emptied its task pool - nothing to lock
547 if( sync_prepare_done )
548 ITT_NOTIFY(sync_cancel, victim_arena_slot);
549 break;
550 }
551 if( victim_task_pool != LockedTaskPool &&
552 as_atomic(victim_arena_slot->task_pool).compare_and_swap(LockedTaskPool, victim_task_pool ) == victim_task_pool )
553 {
554 // We've locked victim's task pool
555 ITT_NOTIFY(sync_acquired, victim_arena_slot);
556 break;
557 }
558 else if( !sync_prepare_done ) {
559 // Start waiting
560 ITT_NOTIFY(sync_prepare, victim_arena_slot);
561 sync_prepare_done = true;
562 }
563 GATHER_STATISTIC( ++my_counters.thieves_conflicts );
564 // Someone else acquired a lock, so pause and do exponential backoff.
565#if __TBB_STEALING_ABORT_ON_CONTENTION
566 if(!backoff.bounded_pause()) {
 567 // The threshold of 16 was obtained empirically; the theory behind it is
 568 // that the number of threads can become much bigger than the number of
 569 // tasks one thread can spawn, causing excessive contention.
570 // TODO: However even small arenas can benefit from the abort on contention
571 // if preemption of a thief is a problem
572 if(my_arena->my_limit >= 16)
573 return EmptyTaskPool;
574 __TBB_Yield();
575 }
576#else
577 backoff.pause();
578#endif
579 }
580 __TBB_ASSERT( victim_task_pool == EmptyTaskPool ||
581 (victim_arena_slot->task_pool == LockedTaskPool && victim_task_pool != LockedTaskPool),
582 "not really locked victim's task pool?" );
583 return victim_task_pool;
584} // generic_scheduler::lock_task_pool
#define __TBB_Yield()
Definition: ibm_aix51.h:44
atomic< unsigned > my_limit
The maximal number of currently busy slots.
Definition: arena.h:161

References __TBB_ASSERT, __TBB_Yield, tbb::internal::as_atomic(), EmptyTaskPool, GATHER_STATISTIC, ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::arena_base::my_limit, sync_cancel, and tbb::internal::arena_slot_line1::task_pool.

Referenced by steal_task_from().
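
The locking protocol above treats the slot's task_pool pointer itself as the lock word: it can hold an "empty" sentinel, a "locked" sentinel, or a real pool pointer, and a thief acquires the pool by CAS-ing the real pointer to the locked sentinel, backing off on contention. Here is a minimal sketch of that protocol with illustrative sentinels and a crude yield-based backoff; the constants and names differ from TBB's.

#include <atomic>
#include <thread>
#include <vector>
#include <cstddef>
#include <cstdio>

typedef int* pool_t;
static pool_t const EmptyPool  = (pool_t)0;
static pool_t const LockedPool = (pool_t)-1;

struct demo_slot {
    std::atomic<pool_t> pool;
};

// Returns the locked pool pointer, or EmptyPool if there is nothing to lock.
static pool_t lock_slot( demo_slot& slot ) {
    int pause = 1;
    for (;;) {
        pool_t p = slot.pool.load( std::memory_order_acquire );
        if ( p == EmptyPool )
            return EmptyPool;                    // nothing to steal from
        if ( p != LockedPool && slot.pool.compare_exchange_strong( p, LockedPool ) )
            return p;                            // we own the slot now
        for ( int i = 0; i < pause; ++i )
            std::this_thread::yield();           // crude exponential backoff
        if ( pause < 16 ) pause *= 2;
    }
}

static void unlock_slot( demo_slot& slot, pool_t p ) {
    slot.pool.store( p, std::memory_order_release ); // restore the munged pointer
}

int main() {
    int storage[4] = { 1, 2, 3, 4 };
    demo_slot slot;
    slot.pool.store( storage );
    std::vector<std::thread> thieves;
    std::atomic<int> sum( 0 );
    for ( int i = 0; i < 4; ++i )
        thieves.push_back( std::thread( [&]{
            pool_t p = lock_slot( slot );
            if ( p != EmptyPool ) { sum += p[0]; unlock_slot( slot, p ); }
        } ) );
    for ( std::size_t i = 0; i < thieves.size(); ++i ) thieves[i].join();
    std::printf( "sum = %d\n", (int)sum );
    return 0;
}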


◆ master_outermost_level()

bool tbb::internal::generic_scheduler::master_outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level in a master thread.

Returns true when this scheduler instance is associated with an application thread, and is not executing any TBB task. This includes being in a TBB dispatch loop (one of wait_for_all methods) invoked directly from that thread.

Definition at line 653 of file scheduler.h.

653 {
654 return !is_worker() && outermost_level();
655}
bool outermost_level() const
True if the scheduler is on the outermost dispatch level.
Definition: scheduler.h:649

References is_worker(), and outermost_level().


◆ max_threads_in_arena()

unsigned tbb::internal::generic_scheduler::max_threads_in_arena ( )
inline

Returns the concurrency limit of the current arena.

Definition at line 677 of file scheduler.h.

677 {
678 __TBB_ASSERT(my_arena, NULL);
679 return my_arena->my_num_slots;
680}
unsigned my_num_slots
The number of slots in the arena.
Definition: arena.h:250

References __TBB_ASSERT, tbb::internal::scheduler_state::my_arena, and tbb::internal::arena_base::my_num_slots.

Referenced by tbb::internal::get_initial_auto_partitioner_divisor(), and tbb::internal::affinity_partitioner_base_v3::resize().


◆ nested_arena_entry()

void tbb::internal::generic_scheduler::nested_arena_entry ( arena a,
size_t  slot_index 
)

Definition at line 729 of file arena.cpp.

729 {
730 __TBB_ASSERT( is_alive(a->my_guard), NULL );
731 __TBB_ASSERT( a!=my_arena, NULL);
732
733 // overwrite arena settings
734#if __TBB_TASK_PRIORITY
735 if ( my_offloaded_tasks )
736 my_arena->orphan_offloaded_tasks( *this );
737 my_offloaded_tasks = NULL;
738#endif /* __TBB_TASK_PRIORITY */
739 attach_arena( a, slot_index, /*is_master*/true );
740 __TBB_ASSERT( my_arena == a, NULL );
742 // TODO? ITT_NOTIFY(sync_acquired, a->my_slots + index);
743 // TODO: it requires market to have P workers (not P-1)
744 // TODO: a preempted worker should be excluded from assignment to other arenas e.g. my_slack--
745 if( !is_worker() && slot_index >= my_arena->my_num_reserved_slots )
747#if __TBB_ARENA_OBSERVER
748 my_last_local_observer = 0; // TODO: try optimize number of calls
749 my_arena->my_observers.notify_entry_observers( my_last_local_observer, /*worker=*/false );
750#endif
751#if __TBB_PREVIEW_RESUMABLE_TASKS
752 my_wait_task = NULL;
753#endif
754}
unsigned my_num_reserved_slots
The number of reserved slots (can be occupied only by masters).
Definition: arena.h:253
market * my_market
The market that owns this arena.
Definition: arena.h:232
static void assume_scheduler(generic_scheduler *s)
Temporarily set TLS slot to the given scheduler.
Definition: governor.cpp:116
void adjust_demand(arena &, int delta)
Request that arena's need in workers should be adjusted.
Definition: market.cpp:557
void attach_arena(arena *, size_t index, bool is_master)
Definition: arena.cpp:80

References __TBB_ASSERT, tbb::internal::market::adjust_demand(), tbb::internal::governor::assume_scheduler(), attach_arena(), is_worker(), tbb::internal::scheduler_state::my_arena, tbb::internal::arena_base::my_market, and tbb::internal::arena_base::my_num_reserved_slots.


◆ nested_arena_exit()

void tbb::internal::generic_scheduler::nested_arena_exit ( )

Definition at line 756 of file arena.cpp.

756 {
757#if __TBB_ARENA_OBSERVER
758 my_arena->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
759#endif /* __TBB_ARENA_OBSERVER */
760#if __TBB_TASK_PRIORITY
761 if ( my_offloaded_tasks )
762 my_arena->orphan_offloaded_tasks( *this );
763#endif
766 // Free the master slot.
767 __TBB_ASSERT(my_arena->my_slots[my_arena_index].my_scheduler, "A slot is already empty");
769 my_arena->my_exit_monitors.notify_one(); // do not relax!
770}
concurrent_monitor my_exit_monitors
Waiting object for master threads that cannot join the arena.
Definition: arena.h:263
void notify_one()
Notify one thread about the event.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), tbb::internal::market::adjust_demand(), is_worker(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::arena_base::my_exit_monitors, tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, and tbb::internal::concurrent_monitor::notify_one().

Referenced by tbb::internal::nested_arena_context::~nested_arena_context().


◆ outermost_level()

bool tbb::internal::generic_scheduler::outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level.

Definition at line 649 of file scheduler.h.

 649 {
 650 return my_properties.outermost;
 651}

References tbb::internal::scheduler_state::my_properties, and tbb::internal::scheduler_properties::outermost.

Referenced by master_outermost_level(), and worker_outermost_level().


◆ plugged_return_list()

static task * tbb::internal::generic_scheduler::plugged_return_list ( )
inlinestatic

Special value used to mark my_return_list as not taking any more entries.

Definition at line 458 of file scheduler.h.

458{return (task*)(intptr_t)(-1);}

Referenced by cleanup_scheduler(), and free_nonlocal_small_task().


◆ prepare_for_spawning()

task * tbb::internal::generic_scheduler::prepare_for_spawning ( task t)
inline

Checks if t is affinitized to another thread, and if so, bundles it as proxy.

Returns either t or a proxy containing t.

Definition at line 595 of file scheduler.cpp.

595 {
596 __TBB_ASSERT( t->state()==task::allocated, "attempt to spawn task that is not in 'allocated' state" );
597 t->prefix().state = task::ready;
598#if TBB_USE_ASSERT
599 if( task* parent = t->parent() ) {
600 internal::reference_count ref_count = parent->prefix().ref_count;
601 __TBB_ASSERT( ref_count>=0, "attempt to spawn task whose parent has a ref_count<0" );
602 __TBB_ASSERT( ref_count!=0, "attempt to spawn task whose parent has a ref_count==0 (forgot to set_ref_count?)" );
603 parent->prefix().extra_state |= es_ref_count_active;
604 }
605#endif /* TBB_USE_ASSERT */
606 affinity_id dst_thread = t->prefix().affinity;
607 __TBB_ASSERT( dst_thread == 0 || is_version_3_task(*t),
608 "backwards compatibility to TBB 2.0 tasks is broken" );
609#if __TBB_TASK_ISOLATION
611 t->prefix().isolation = isolation;
612#endif /* __TBB_TASK_ISOLATION */
613 if( dst_thread != 0 && dst_thread != my_affinity_id ) {
614 task_proxy& proxy = (task_proxy&)allocate_task( sizeof(task_proxy),
615 __TBB_CONTEXT_ARG(NULL, NULL) );
616 // Mark as a proxy
618 proxy.outbox = &my_arena->mailbox(dst_thread);
619 // Mark proxy as present in both locations (sender's task pool and destination mailbox)
620 proxy.task_and_tag = intptr_t(t) | task_proxy::location_mask;
621#if __TBB_TASK_PRIORITY
622 poison_pointer( proxy.prefix().context );
623#endif /* __TBB_TASK_PRIORITY */
624 __TBB_ISOLATION_EXPR( proxy.prefix().isolation = isolation );
625 ITT_NOTIFY( sync_releasing, proxy.outbox );
 626 // Mail the proxy; on success it may be destroyed by another thread at any moment after this point.
627 if ( proxy.outbox->push(&proxy) )
628 return &proxy;
629 // The mailbox is overfilled, deallocate the proxy and return the initial task.
630 free_task<small_task>(proxy);
631 }
632 return t;
633}
intptr_t isolation_tag
A tag for task isolation.
Definition: task.h:143
@ es_ref_count_active
Set if ref_count might be changed by another thread. Used for debugging.
unsigned char extra_state
Miscellaneous state that is not directly visible to users, stored as a byte for compactness.
Definition: task.h:292
isolation_tag isolation
The tag used for task isolation.
Definition: task.h:220
@ ready
task is in ready pool, or is going to be put there, or was just taken off.
Definition: task.h:641
static const intptr_t location_mask
Definition: mailbox.h:32

References __TBB_ASSERT, __TBB_CONTEXT_ARG, __TBB_ISOLATION_EXPR, tbb::internal::task_prefix::affinity, allocate_task(), tbb::task::allocated, tbb::internal::task_prefix::context, tbb::internal::es_ref_count_active, tbb::internal::es_task_proxy, tbb::internal::task_prefix::extra_state, is_version_3_task(), tbb::internal::task_prefix::isolation, ITT_NOTIFY, tbb::internal::task_proxy::location_mask, tbb::internal::arena::mailbox(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::task_proxy::outbox, tbb::task::parent(), parent, tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::mail_outbox::push(), tbb::task::ready, tbb::internal::task_prefix::state, tbb::task::state(), sync_releasing, and tbb::internal::task_proxy::task_and_tag.

Referenced by local_spawn().
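
The essential decision made above is a routing one: a task affinitized to another thread is wrapped into a proxy and mailed to that thread's outbox, and if mailing fails (the mailbox is overfilled) the original task stays in the local pool. The sketch below captures just that decision under simplifying assumptions; the fixed-size std::deque mailbox and all names are illustrative, not TBB's mail_outbox.

#include <deque>
#include <cstdio>

struct demo_task { int id; unsigned affinity; };          // 0 == no affinity

struct demo_mailbox {
    std::deque<demo_task*> items;
    bool push( demo_task* t ) {
        if ( items.size() >= 8 ) return false;            // "overfilled" mailbox
        items.push_back( t );
        return true;
    }
};

// Returns true if the task was mailed, false if the caller must spawn it locally.
static bool route_for_spawning( demo_task* t, unsigned my_id, demo_mailbox* outboxes ) {
    if ( t->affinity == 0 || t->affinity == my_id )
        return false;                                     // stays in the local task pool
    return outboxes[t->affinity].push( t );               // mail it to the affinitized thread
}

int main() {
    demo_mailbox outboxes[3];
    demo_task a = { 1, 0 }, b = { 2, 2 };
    const unsigned my_id = 1;
    std::printf( "task a mailed: %d\n", (int)route_for_spawning( &a, my_id, outboxes ) );
    std::printf( "task b mailed: %d\n", (int)route_for_spawning( &b, my_id, outboxes ) );
    return 0;
}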


◆ prepare_task_pool()

size_t tbb::internal::generic_scheduler::prepare_task_pool ( size_t  n)
inline

Makes sure that the task pool can accommodate at least n more elements.

If necessary, relocates existing task pointers or grows the ready task deque. Returns the (possibly updated) tail index (not accounting for n).

Definition at line 439 of file scheduler.cpp.

439 {
440 size_t T = __TBB_load_relaxed(my_arena_slot->tail); // mirror
441 if ( T + num_tasks <= my_arena_slot->my_task_pool_size )
442 return T;
443
444 size_t new_size = num_tasks;
445
451 return 0;
452 }
453
455 size_t H = __TBB_load_relaxed( my_arena_slot->head ); // mirror
456 task** task_pool = my_arena_slot->task_pool_ptr;;
458 // Count not skipped tasks. Consider using std::count_if.
459 for ( size_t i = H; i < T; ++i )
460 if ( task_pool[i] ) ++new_size;
461 // If the free space at the beginning of the task pool is too short, we
462 // are likely facing a pathological single-producer-multiple-consumers
463 // scenario, and thus it's better to expand the task pool
465 if ( allocate ) {
466 // Grow task pool. As this operation is rare, and its cost is asymptotically
467 // amortizable, we can tolerate new task pool allocation done under the lock.
468 if ( new_size < 2 * my_arena_slot->my_task_pool_size )
470 my_arena_slot->allocate_task_pool( new_size ); // updates my_task_pool_size
471 }
472 // Filter out skipped tasks. Consider using std::copy_if.
473 size_t T1 = 0;
474 for ( size_t i = H; i < T; ++i )
475 if ( task_pool[i] )
476 my_arena_slot->task_pool_ptr[T1++] = task_pool[i];
477 // Deallocate the previous task pool if a new one has been allocated.
478 if ( allocate )
479 NFS_Free( task_pool );
480 else
482 // Publish the new state.
485 return T1;
486}
void commit_relocated_tasks(size_t new_tail)
Makes relocated tasks visible to thieves and releases the local task pool.
Definition: scheduler.h:719
size_t my_task_pool_size
Capacity of the primary task pool (number of elements - pointers to task).
void allocate_task_pool(size_t n)
void fill_with_canary_pattern(size_t, size_t)

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), acquire_task_pool(), tbb::internal::arena_slot::allocate_task_pool(), assert_task_pool_valid(), commit_relocated_tasks(), tbb::internal::arena_slot::fill_with_canary_pattern(), tbb::internal::arena_slot_line1::head, is_quiescent_local_task_pool_reset(), is_task_pool_published(), min_task_pool_size, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena_slot_line2::my_task_pool_size, new_size, tbb::internal::NFS_Free(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by local_spawn().
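
The core of the routine above is a compact-or-grow step: count the surviving (non-NULL) pointers in [head, tail), allocate a larger array if the requested headroom does not fit, and pack the survivors to the front so the new tail equals their count. A minimal sketch of that step follows, with std::vector standing in for the NFS-aligned allocation and without the locking and canary handling; the names are illustrative.

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

struct demo_task { int id; };

// Returns the new tail; ensures capacity for at least n more slots.
static std::size_t prepare_pool( std::vector<demo_task*>& pool,
                                 std::size_t& head, std::size_t& tail, std::size_t n ) {
    if ( tail + n <= pool.size() )
        return tail;                                   // enough room already
    std::size_t live = 0;
    for ( std::size_t i = head; i < tail; ++i )
        if ( pool[i] ) ++live;                         // count not-skipped tasks
    std::vector<demo_task*> fresh( std::max( pool.size() * 2, live + n ) );
    std::size_t t1 = 0;
    for ( std::size_t i = head; i < tail; ++i )
        if ( pool[i] ) fresh[t1++] = pool[i];          // filter out the holes
    pool.swap( fresh );
    head = 0;
    tail = t1;
    return t1;
}

int main() {
    std::vector<demo_task*> pool( 4 );
    demo_task a = {1}, b = {2};
    pool[0] = NULL; pool[1] = &a; pool[2] = NULL; pool[3] = &b;  // two holes
    std::size_t head = 0, tail = 4;
    std::size_t t = prepare_pool( pool, head, tail, 4 );
    std::printf( "new tail = %zu, capacity = %zu\n", t, pool.size() );
    return 0;
}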


◆ publish_task_pool()

void tbb::internal::generic_scheduler::publish_task_pool ( )
inline

Used by workers to enter the task pool.

Does not lock the task pool if the arena slot has been successfully grabbed.

Definition at line 1248 of file scheduler.cpp.

1248 {
1249 __TBB_ASSERT ( my_arena, "no arena: initialization not completed?" );
1250 __TBB_ASSERT ( my_arena_index < my_arena->my_num_slots, "arena slot index is out-of-bound" );
1252 __TBB_ASSERT ( my_arena_slot->task_pool == EmptyTaskPool, "someone else grabbed my arena slot?" );
1254 "entering arena without tasks to share" );
1255 // Release signal on behalf of previously spawned tasks (when this thread was not in arena yet)
1258}

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_with_release(), EmptyTaskPool, tbb::internal::arena_slot_line1::head, ITT_NOTIFY, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, sync_releasing, tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by get_task(), and local_spawn().


◆ receive_or_steal_task()

virtual task * tbb::internal::generic_scheduler::receive_or_steal_task ( __TBB_ISOLATION_ARG(__TBB_atomic reference_count &completion_ref_count, isolation_tag isolation)  )
pure virtual

Try getting a task from other threads (via mailbox, stealing, FIFO queue, orphans adoption).

Returns obtained task or NULL if all attempts fail.

Implemented in tbb::internal::custom_scheduler< SchedulerTraits >.

◆ release_task_pool()

void tbb::internal::generic_scheduler::release_task_pool ( ) const
inline

Unlocks the local task pool.

Restores my_arena_slot->task_pool munged by acquire_task_pool. Requires correctly set my_arena_slot->task_pool_ptr.

Definition at line 522 of file scheduler.cpp.

522 {
523 if ( !is_task_pool_published() )
524 return; // we are not in arena - nothing to unlock
525 __TBB_ASSERT( my_arena_slot, "we are not in arena" );
526 __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "arena slot is not locked" );
 527 ITT_NOTIFY(sync_releasing, my_arena_slot);
 528 __TBB_store_with_release( my_arena_slot->task_pool, my_arena_slot->task_pool_ptr );
 529}

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena_slot, sync_releasing, tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by cleanup_master(), commit_relocated_tasks(), and get_task().


◆ reset_task_pool_and_leave()

void tbb::internal::generic_scheduler::reset_task_pool_and_leave ( )
inline

Resets head and tail indices to 0, and leaves task pool.

The task pool must be locked by the owner (via acquire_task_pool).

Definition at line 702 of file scheduler.h.

702 {
703 __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when resetting task pool" );
 704 __TBB_store_relaxed( my_arena_slot->tail, 0 );
 705 __TBB_store_relaxed( my_arena_slot->head, 0 );
 706 leave_task_pool();
 707}

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), tbb::internal::arena_slot_line1::head, leave_task_pool(), LockedTaskPool, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line1::task_pool.

Referenced by get_task().


◆ spawn()

void tbb::internal::generic_scheduler::spawn ( task first,
task *&  next 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 741 of file scheduler.cpp.

 741 {
 742 governor::local_scheduler()->local_spawn( &first, next );
 743}

References tbb::internal::first(), tbb::internal::governor::local_scheduler(), and local_spawn().


◆ spawn_root_and_wait()

void tbb::internal::generic_scheduler::spawn_root_and_wait ( task first,
task *&  next 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 745 of file scheduler.cpp.

 745 {
 746 governor::local_scheduler()->local_spawn_root_and_wait( &first, next );
 747}
void local_spawn_root_and_wait(task *first, task *&next)
Definition: scheduler.cpp:720

References tbb::internal::first(), tbb::internal::governor::local_scheduler(), and local_spawn_root_and_wait().


◆ steal_task()

task * tbb::internal::generic_scheduler::steal_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )

Attempts to steal a task from a randomly chosen thread/scheduler.

Definition at line 1109 of file scheduler.cpp.

1109 {
1110 // Try to steal a task from a random victim.
1111 size_t k = my_random.get() % (my_arena->my_limit-1);
1112 arena_slot* victim = &my_arena->my_slots[k];
1113 // The following condition excludes the master that might have
 1114 // already taken our previous place in the arena from the list
 1115 // of potential victims. But since such a situation can take
1116 // place only in case of significant oversubscription, keeping
1117 // the checks simple seems to be preferable to complicating the code.
1118 if( k >= my_arena_index )
1119 ++victim; // Adjusts random distribution to exclude self
1120 task **pool = victim->task_pool;
1121 task *t = NULL;
1122 if( pool == EmptyTaskPool || !(t = steal_task_from( __TBB_ISOLATION_ARG(*victim, isolation) )) )
1123 return NULL;
1124 if( is_proxy(*t) ) {
1125 task_proxy &tp = *(task_proxy*)t;
1126 t = tp.extract_task<task_proxy::pool_bit>();
1127 if ( !t ) {
1128 // Proxy was empty, so it's our responsibility to free it
1129 free_task<no_cache_small_task>(tp);
1130 return NULL;
1131 }
1132 GATHER_STATISTIC( ++my_counters.proxies_stolen );
1133 }
1134 t->prefix().extra_state |= es_task_is_stolen;
 1135 if( is_version_3_task(*t) ) {
 1136 my_innermost_running_task = t;
 1137 t->prefix().owner = this;
1138 t->note_affinity( my_affinity_id );
1139 }
1140 GATHER_STATISTIC( ++my_counters.steals_committed );
1141 return t;
1142}
#define __TBB_ISOLATION_ARG(arg1, isolation)
scheduler * owner
Obsolete. The scheduler that owns the task.
Definition: task.h:247
task * steal_task_from(__TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation))
Steal task from another scheduler's ready pool.
Definition: scheduler.cpp:1144
unsigned short get()
Get a random number.
Definition: tbb_misc.h:146

References __TBB_ISOLATION_ARG, EmptyTaskPool, tbb::internal::es_task_is_stolen, tbb::internal::task_prefix::extra_state, tbb::internal::task_proxy::extract_task(), GATHER_STATISTIC, tbb::internal::FastRandom::get(), is_proxy(), is_version_3_task(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::arena_base::my_limit, my_random, tbb::internal::arena::my_slots, tbb::task::note_affinity(), tbb::internal::task_prefix::owner, tbb::internal::task_proxy::pool_bit, tbb::task::prefix(), steal_task_from(), and tbb::internal::arena_slot_line1::task_pool.
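
The victim selection above uses a small trick to pick a slot uniformly at random while excluding the thief's own slot: draw k from [0, limit-2] and shift any value at or past the thief's index up by one. The sketch below demonstrates that the resulting distribution never lands on the thief's slot; std::mt19937 stands in for TBB's FastRandom, and the indices are illustrative.

#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

static std::size_t pick_victim( std::size_t my_index, std::size_t limit, std::mt19937& rng ) {
    std::size_t k = rng() % ( limit - 1 );   // one fewer candidate than slots
    if ( k >= my_index )
        ++k;                                 // skip over our own slot
    return k;
}

int main() {
    const std::size_t my_index = 2, limit = 4;
    std::mt19937 rng( 42 );
    std::vector<int> hits( limit, 0 );
    for ( int i = 0; i < 100000; ++i )
        ++hits[pick_victim( my_index, limit, rng )];
    for ( std::size_t s = 0; s < limit; ++s )
        std::printf( "slot %zu chosen %d times\n", s, hits[s] );  // slot 2 stays at 0
    return 0;
}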


◆ steal_task_from()

task * tbb::internal::generic_scheduler::steal_task_from ( __TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation)  )

Steal task from another scheduler's ready pool.

Definition at line 1144 of file scheduler.cpp.

1144 {
1145 task** victim_pool = lock_task_pool( &victim_slot );
1146 if ( !victim_pool )
1147 return NULL;
1148 task* result = NULL;
1149 size_t H = __TBB_load_relaxed(victim_slot.head); // mirror
1150 size_t H0 = H;
1151 bool tasks_omitted = false;
1152 do {
1153 __TBB_store_relaxed( victim_slot.head, ++H );
1154 atomic_fence();
1155 if ( (intptr_t)H > (intptr_t)__TBB_load_relaxed( victim_slot.tail ) ) {
 1156 // Stealing attempt failed; the deque contents have not been changed by us
1157 GATHER_STATISTIC( ++my_counters.thief_backoffs );
1158 __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
1159 __TBB_ASSERT( !result, NULL );
1160 goto unlock;
1161 }
1162 __TBB_control_consistency_helper(); // on victim_slot.tail
1163 result = victim_pool[H-1];
1164 __TBB_ASSERT( !is_poisoned( result ), NULL );
1165
1166 if ( result ) {
1167 __TBB_ISOLATION_EXPR( if ( isolation == no_isolation || isolation == result->prefix().isolation ) )
1168 {
1169 if ( !is_proxy( *result ) )
1170 break;
1171 task_proxy& tp = *static_cast<task_proxy*>(result);
1172 // If mailed task is likely to be grabbed by its destination thread, skip it.
1173 if ( !(task_proxy::is_shared( tp.task_and_tag ) && tp.outbox->recipient_is_idle()) )
1174 break;
1175 GATHER_STATISTIC( ++my_counters.proxies_bypassed );
1176 }
1177 // The task cannot be executed either due to isolation or proxy constraints.
1178 result = NULL;
1179 tasks_omitted = true;
1180 } else if ( !tasks_omitted ) {
1181 // Cleanup the task pool from holes until a task is skipped.
1182 __TBB_ASSERT( H0 == H-1, NULL );
1183 poison_pointer( victim_pool[H0] );
1184 H0 = H;
1185 }
1186 } while ( !result );
1187 __TBB_ASSERT( result, NULL );
1188
1189 // emit "task was consumed" signal
1190 ITT_NOTIFY( sync_acquired, (void*)((uintptr_t)&victim_slot+sizeof( uintptr_t )) );
1191 poison_pointer( victim_pool[H-1] );
1192 if ( tasks_omitted ) {
1193 // Some proxies in the task pool have been omitted. Set the stolen task to NULL.
1194 victim_pool[H-1] = NULL;
1195 __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
1196 }
1197unlock:
1198 unlock_task_pool( &victim_slot, victim_pool );
1199#if __TBB_PREFETCHING
1200 __TBB_cl_evict(&victim_slot.head);
1201 __TBB_cl_evict(&victim_slot.tail);
1202#endif
1203 if ( tasks_omitted )
1204 // Synchronize with snapshot as the head and tail can be bumped which can falsely trigger EMPTY state
1206 return result;
1207}
static bool is_shared(intptr_t tat)
True if the proxy is stored both in its sender's pool and in the destination mailbox.
Definition: mailbox.h:46
task ** lock_task_pool(arena_slot *victim_arena_slot) const
Locks victim's task pool, and returns pointer to it. The pointer can be NULL.
Definition: scheduler.cpp:537
void unlock_task_pool(arena_slot *victim_arena_slot, task **victim_task_pool) const
Unlocks victim's task pool.
Definition: scheduler.cpp:586

References __TBB_ASSERT, __TBB_cl_evict, __TBB_control_consistency_helper, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), tbb::internal::arena::advertise_new_work(), tbb::atomic_fence(), GATHER_STATISTIC, tbb::internal::arena_slot_line1::head, is_proxy(), tbb::internal::task_proxy::is_shared(), tbb::internal::task_prefix::isolation, ITT_NOTIFY, lock_task_pool(), tbb::internal::scheduler_state::my_arena, tbb::internal::no_isolation, tbb::internal::task_proxy::outbox, tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::mail_outbox::recipient_is_idle(), tbb::internal::arena_slot_line2::tail, tbb::internal::task_proxy::task_and_tag, unlock_task_pool(), and tbb::internal::arena::wakeup.

Referenced by steal_task().
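
The stealing handshake above is: publish an incremented head, issue a full fence, and only then compare the head against the tail; if the head has overrun the tail, the deque was (or became) empty and the head is rolled back. The sketch below isolates just that handshake under simplifying assumptions (single-threaded driver, no victim locking, no proxy or isolation handling); names are illustrative, not TBB's arena_slot.

#include <atomic>
#include <cstdio>

struct demo_slot {
    std::atomic<long> head;
    std::atomic<long> tail;
    int* pool;
};

// Returns a pointer to the stolen element, or NULL if the pool looked empty.
static int* steal_from( demo_slot& s ) {
    long h = s.head.load( std::memory_order_relaxed );
    s.head.store( h + 1, std::memory_order_relaxed );        // claim one element
    std::atomic_thread_fence( std::memory_order_seq_cst );   // make the claim visible before checking tail
    if ( h + 1 > s.tail.load( std::memory_order_relaxed ) ) {
        s.head.store( h, std::memory_order_relaxed );        // nothing there: roll back
        return NULL;
    }
    return &s.pool[h];
}

int main() {
    int tasks[3] = { 10, 20, 30 };
    demo_slot s;
    s.head.store( 0 ); s.tail.store( 3 ); s.pool = tasks;
    while ( int* t = steal_from( s ) )
        std::printf( "stole %d\n", *t );
    std::printf( "pool empty, head rolled back to %ld\n", s.head.load() );
    return 0;
}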


◆ unlock_task_pool()

void tbb::internal::generic_scheduler::unlock_task_pool ( arena_slot victim_arena_slot,
task **  victim_task_pool 
) const
inline

Unlocks victim's task pool.

Restores victim_arena_slot->task_pool munged by lock_task_pool.

Definition at line 586 of file scheduler.cpp.

587 {
588 __TBB_ASSERT( victim_arena_slot, "empty victim arena slot pointer" );
589 __TBB_ASSERT( victim_arena_slot->task_pool == LockedTaskPool, "victim arena slot is not locked" );
590 ITT_NOTIFY(sync_releasing, victim_arena_slot);
591 __TBB_store_with_release( victim_arena_slot->task_pool, victim_task_pool );
592}

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, LockedTaskPool, sync_releasing, and tbb::internal::arena_slot_line1::task_pool.

Referenced by steal_task_from().


◆ wait_until_empty()

void tbb::internal::generic_scheduler::wait_until_empty ( )

Definition at line 772 of file arena.cpp.

772 {
773 my_dummy_task->prefix().ref_count++; // prevents exit from local_wait_for_all when local work is done enforcing the stealing
777}
tbb::atomic< uintptr_t > my_pool_state
Current task pool state and estimate of available tasks amount.
Definition: arena.h:195
static const pool_state_t SNAPSHOT_EMPTY
No tasks to steal since last snapshot was taken.
Definition: arena.h:318

References local_wait_for_all(), tbb::internal::scheduler_state::my_arena, my_dummy_task, tbb::internal::arena_base::my_pool_state, tbb::task::prefix(), tbb::internal::task_prefix::ref_count, and tbb::internal::arena::SNAPSHOT_EMPTY.


◆ worker_outermost_level()

bool tbb::internal::generic_scheduler::worker_outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level in a worker thread.

Definition at line 657 of file scheduler.h.

657 {
658 return is_worker() && outermost_level();
659}

References is_worker(), and outermost_level().


Friends And Related Function Documentation

◆ custom_scheduler

template<typename SchedulerTraits >
friend class custom_scheduler
friend

Definition at line 389 of file scheduler.h.

Member Data Documentation

◆ min_task_pool_size

const size_t tbb::internal::generic_scheduler::min_task_pool_size = 64
static

Initial size of the task deque, sufficient to serve 4 nested parallel_for calls with an iteration space of 65535 grains each without reallocation.

Definition at line 369 of file scheduler.h.

Referenced by local_spawn(), and prepare_task_pool().
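
A rough way to read that constant (an informal estimate, not taken from the TBB sources): a binary-splitting parallel_for over 65535 grains recurses to a depth of about log2(65536) = 16, keeping on the order of 16 of its tasks in the local deque at a time, so four nested calls need roughly 4 * 16 = 64 slots, which matches the chosen value.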

◆ my_auto_initialized

bool tbb::internal::generic_scheduler::my_auto_initialized

True if *this was created by automatic TBB initialization.

Definition at line 197 of file scheduler.h.

◆ my_dummy_task

task* tbb::internal::generic_scheduler::my_dummy_task

Fake root task created by slave threads.

The task is used as the "parent" argument to method wait_for_all.

Definition at line 186 of file scheduler.h.

Referenced by attach_arena(), cleanup_master(), cleanup_scheduler(), generic_scheduler(), tbb::internal::nested_arena_context::mimic_outermost_level(), wait_until_empty(), and tbb::internal::nested_arena_context::~nested_arena_context().

◆ my_free_list

task* tbb::internal::generic_scheduler::my_free_list

Free list of small tasks that can be reused.

Definition at line 178 of file scheduler.h.

Referenced by allocate_task(), cleanup_scheduler(), and free_task().

◆ my_market

market* tbb::internal::generic_scheduler::my_market

The market I am in.

Definition at line 172 of file scheduler.h.

Referenced by attach_arena(), cleanup_master(), cleanup_scheduler(), and init_stack_info().

◆ my_random

FastRandom tbb::internal::generic_scheduler::my_random

Random number generator used for picking a random victim from which to steal.

Definition at line 175 of file scheduler.h.

Referenced by steal_task(), and tbb::internal::custom_scheduler< SchedulerTraits >::tally_completion_of_predecessor().

◆ my_ref_count

long tbb::internal::generic_scheduler::my_ref_count

Reference count for scheduler.

Number of task_scheduler_init objects that point to this scheduler

Definition at line 190 of file scheduler.h.

◆ my_return_list

task* tbb::internal::generic_scheduler::my_return_list

List of small tasks that have been returned to this scheduler by other schedulers.

Definition at line 465 of file scheduler.h.

Referenced by allocate_task(), cleanup_scheduler(), and generic_scheduler().

◆ my_small_task_count

__TBB_atomic intptr_t tbb::internal::generic_scheduler::my_small_task_count

Number of small tasks that have been allocated by this scheduler.

Definition at line 461 of file scheduler.h.

Referenced by allocate_task(), cleanup_scheduler(), and destroy().

◆ my_stealing_threshold

uintptr_t tbb::internal::generic_scheduler::my_stealing_threshold

Position in the call stack specifying its maximal filling when stealing is still allowed.

Definition at line 155 of file scheduler.h.

Referenced by can_steal(), and init_stack_info().

◆ null_arena_index

const size_t tbb::internal::generic_scheduler::null_arena_index = ~size_t(0)
static

Definition at line 161 of file scheduler.h.

◆ quick_task_size

const size_t tbb::internal::generic_scheduler::quick_task_size = 256-task_prefix_reservation_size
static

If sizeof(task) is <=quick_task_size, it is handled on a free list instead of malloc'd.

Definition at line 144 of file scheduler.h.

Referenced by allocate_task().


The documentation for this class was generated from the following files:

Copyright © 2005-2020 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.