RobWorkProject 23.9.11
thread_pool Class Reference

A fast, lightweight, and easy-to-use C++17 thread pool class. More...

#include <BS_thread_pool.hpp>

Public Member Functions

 thread_pool (const concurrency_t thread_count_=0)
 Construct a new thread pool.
 
 ~thread_pool ()
 Destruct the thread pool. Waits for all tasks to complete, then destroys all threads. Note that if the pool is paused, then any tasks still in the queue will never be executed.
 
size_t get_tasks_queued () const
 Get the number of tasks currently waiting in the queue to be executed by the threads.
 
size_t get_tasks_running () const
 Get the number of tasks currently being executed by the threads.
 
size_t get_tasks_total () const
 Get the total number of unfinished tasks: either still in the queue, or running in a thread. Note that get_tasks_total() == get_tasks_queued() + get_tasks_running().
 
concurrency_t get_thread_count () const
 Get the number of threads in the pool.
 
bool is_paused () const
 Check whether the pool is currently paused.
 
template<typename F , typename T1 , typename T2 , typename T = std::common_type_t<T1, T2>, typename R = std::invoke_result_t<std::decay_t<F>, T, T>>
multi_future< R > parallelize_loop (const T1 first_index, const T2 index_after_last, F &&loop, const size_t num_blocks=0)
 Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. Returns a multi_future object that contains the futures for all of the blocks.
 
template<typename F , typename T , typename R = std::invoke_result_t<std::decay_t<F>, T, T>>
multi_future< R > parallelize_loop (const T index_after_last, F &&loop, const size_t num_blocks=0)
 Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. Returns a multi_future object that contains the futures for all of the blocks. This overload is used for the special case where the first index is 0.
 
void pause ()
 Pause the pool. The workers will temporarily stop retrieving new tasks out of the queue, although any tasks currently being executed will keep running until they are finished.
 
template<typename F , typename T1 , typename T2 , typename T = std::common_type_t<T1, T2>>
void push_loop (const T1 first_index, const T2 index_after_last, F &&loop, const size_t num_blocks=0)
 Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. Does not return a multi_future, so the user must use wait_for_tasks() or some other method to ensure that the loop finishes executing, otherwise bad things will happen.
 
template<typename F , typename T >
void push_loop (const T index_after_last, F &&loop, const size_t num_blocks=0)
 Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. Does not return a multi_future, so the user must use wait_for_tasks() or some other method to ensure that the loop finishes executing, otherwise bad things will happen. This overload is used for the special case where the first index is 0.
 
template<typename F , typename... A>
void push_task (F &&task, A &&... args)
 Push a function with zero or more arguments, but no return value, into the task queue. Does not return a future, so the user must use wait_for_tasks() or some other method to ensure that the task finishes executing, otherwise bad things will happen.
 
void reset (const concurrency_t thread_count_=0)
 Reset the number of threads in the pool. Waits for all currently running tasks to be completed, then destroys all threads in the pool and creates a new thread pool with the new number of threads. Any tasks that were waiting in the queue before the pool was reset will then be executed by the new threads. If the pool was paused before resetting it, the new pool will be paused as well.
 
template<typename F , typename... A, typename R = std::invoke_result_t<std::decay_t<F>, std::decay_t<A>...>>
std::future< R > submit (F &&task, A &&... args)
 Submit a function with zero or more arguments into the task queue. If the function has a return value, get a future for the eventual returned value. If the function has no return value, get an std::future<void> which can be used to wait until the task finishes.
 
void unpause ()
 Unpause the pool. The workers will resume retrieving new tasks out of the queue.
 
void wait_for_tasks ()
 Wait for tasks to be completed. Normally, this function waits for all tasks, both those that are currently running in the threads and those that are still waiting in the queue. However, if the pool is paused, this function only waits for the currently running tasks (otherwise it would wait forever). Note: To wait for just one specific task, use submit() instead, and call the wait() member function of the generated future.
 
template<typename R , typename P >
bool wait_for_tasks_duration (const std::chrono::duration< R, P > &duration)
 Wait for tasks to be completed, but stop waiting after the specified duration has passed.
 
template<typename C , typename D >
bool wait_for_tasks_until (const std::chrono::time_point< C, D > &timeout_time)
 Wait for tasks to be completed, but stop waiting after the specified time point has been reached.
 

Detailed Description

A fast, lightweight, and easy-to-use C++17 thread pool class.

Constructor & Destructor Documentation

◆ thread_pool()

thread_pool ( const concurrency_t  thread_count_ = 0)
inline

Construct a new thread pool.

Parameters
thread_count_: The number of threads to use. The default value is the total number of hardware threads available, as reported by the implementation. This is usually determined by the number of cores in the CPU. If a core is hyperthreaded, it will count as two threads.
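
For example, a minimal construction sketch (one assumption not stated on this page: the upstream BS_thread_pool library exposes the class as BS::thread_pool, and that namespace is used in the sketches below; adjust it if this build exports the class differently):

    #include <BS_thread_pool.hpp>
    #include <iostream>

    int main()
    {
        BS::thread_pool default_pool;   // 0 => use all hardware threads reported by the implementation
        BS::thread_pool small_pool(4);  // explicitly request 4 worker threads

        std::cout << "default pool threads: " << default_pool.get_thread_count() << '\n';
        std::cout << "small pool threads:   " << small_pool.get_thread_count() << '\n';
    }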

Member Function Documentation

◆ get_tasks_queued()

size_t get_tasks_queued ( ) const
inline

Get the number of tasks currently waiting in the queue to be executed by the threads.

Returns
The number of queued tasks.

◆ get_tasks_running()

size_t get_tasks_running ( ) const
inline

Get the number of tasks currently being executed by the threads.

Returns
The number of running tasks.

◆ get_tasks_total()

size_t get_tasks_total ( ) const
inline

Get the total number of unfinished tasks: either still in the queue, or running in a thread. Note that get_tasks_total() == get_tasks_queued() + get_tasks_running().

Returns
The total number of tasks.
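
A short sketch showing the three counters together; the exact values printed depend on scheduling, so the output is only a snapshot:

    #include <BS_thread_pool.hpp>
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main()
    {
        BS::thread_pool pool(2);
        for (int i = 0; i < 8; ++i)
            pool.push_task([] { std::this_thread::sleep_for(std::chrono::milliseconds(50)); });

        // Snapshot of the pool's state (each call samples at a slightly different moment).
        std::cout << "queued:  " << pool.get_tasks_queued()  << '\n'
                  << "running: " << pool.get_tasks_running() << '\n'
                  << "total:   " << pool.get_tasks_total()   << '\n';

        pool.wait_for_tasks();   // all counters are 0 once this returns
    }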

◆ get_thread_count()

concurrency_t get_thread_count ( ) const
inline

Get the number of threads in the pool.

Returns
The number of threads.

◆ is_paused()

bool is_paused ( ) const
inline

Check whether the pool is currently paused.

Returns
true if the pool is paused, false if it is not paused.
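
A sketch combining is_paused() with pause() and unpause(); since the pool is paused before anything is pushed, the queued tasks only start once unpause() is called:

    #include <BS_thread_pool.hpp>
    #include <iostream>

    int main()
    {
        BS::thread_pool pool;

        pool.pause();   // workers stop taking new tasks from the queue
        for (int i = 0; i < 4; ++i)
            pool.push_task([i] { std::cout << "task " << i << '\n'; });

        std::cout << std::boolalpha << pool.is_paused() << '\n';   // true: tasks are queued but not started

        pool.unpause();           // workers resume pulling tasks
        pool.wait_for_tasks();
    }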

◆ parallelize_loop() [1/2]

multi_future<R> parallelize_loop (const T index_after_last, F &&loop, const size_t num_blocks = 0)
inline

Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. Returns a multi_future object that contains the futures for all of the blocks. This overload is used for the special case where the first index is 0.

Template Parameters
F: The type of the function to loop through.
T: The type of the loop indices. Should be a signed or unsigned integer.
R: The return type of the loop function F (can be void).
Parameters
index_after_last: The index after the last index in the loop. The loop will iterate from 0 to (index_after_last - 1) inclusive. In other words, it will be equivalent to "for (T i = 0; i < index_after_last; ++i)". Note that if index_after_last == 0, no blocks will be submitted.
loop: The function to loop through. Will be called once per block. Should take exactly two arguments: the first index in the block and the index after the last index in the block. loop(start, end) should typically involve a loop of the form "for (T i = start; i < end; ++i)".
num_blocks: The maximum number of blocks to split the loop into. The default is to use the number of threads in the pool.
Returns
A multi_future object that can be used to wait for all the blocks to finish. If the loop function returns a value, the multi_future object can also be used to obtain the values returned by each block.
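
A hedged sketch of this zero-based overload, with a loop function that returns a per-block partial sum; in the upstream library multi_future<R>::get() collects these values into a std::vector<R>, which is assumed here:

    #include <BS_thread_pool.hpp>
    #include <iostream>

    int main()
    {
        BS::thread_pool pool;

        // Sum the squares 0^2 + 1^2 + ... + 999^2, one partial sum per block.
        auto partial_sums = pool.parallelize_loop(1000,
            [](const int start, const int end)
            {
                long long sum = 0;
                for (int i = start; i < end; ++i)
                    sum += static_cast<long long>(i) * i;
                return sum;
            });

        long long total = 0;
        for (const long long s : partial_sums.get())   // get() waits and yields one value per block
            total += s;
        std::cout << total << '\n';   // 332833500
    }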

◆ parallelize_loop() [2/2]

multi_future<R> parallelize_loop (const T1 first_index, const T2 index_after_last, F &&loop, const size_t num_blocks = 0)
inline

Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. Returns a multi_future object that contains the futures for all of the blocks.

Template Parameters
F: The type of the function to loop through.
T1: The type of the first index in the loop. Should be a signed or unsigned integer.
T2: The type of the index after the last index in the loop. Should be a signed or unsigned integer. If T1 is not the same as T2, a common type will be automatically inferred.
T: The common type of T1 and T2.
R: The return type of the loop function F (can be void).
Parameters
first_index: The first index in the loop.
index_after_last: The index after the last index in the loop. The loop will iterate from first_index to (index_after_last - 1) inclusive. In other words, it will be equivalent to "for (T i = first_index; i < index_after_last; ++i)". Note that if index_after_last == first_index, no blocks will be submitted.
loop: The function to loop through. Will be called once per block. Should take exactly two arguments: the first index in the block and the index after the last index in the block. loop(start, end) should typically involve a loop of the form "for (T i = start; i < end; ++i)".
num_blocks: The maximum number of blocks to split the loop into. The default is to use the number of threads in the pool.
Returns
A multi_future object that can be used to wait for all the blocks to finish. If the loop function returns a value, the multi_future object can also be used to obtain the values returned by each block.
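
A sketch of the two-index overload, summing the range [1, 101) so the expected result (5050) is easy to check; num_blocks is left at its default of one block per pool thread:

    #include <BS_thread_pool.hpp>
    #include <iostream>

    int main()
    {
        BS::thread_pool pool;

        // Sum the integers 1..100 in parallel blocks.
        auto partial = pool.parallelize_loop(1, 101,
            [](const int start, const int end)
            {
                int sum = 0;
                for (int i = start; i < end; ++i)
                    sum += i;
                return sum;
            });

        int total = 0;
        for (const int s : partial.get())   // waits for all blocks, then yields one value per block
            total += s;
        std::cout << total << '\n';   // 5050
    }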

◆ push_loop() [1/2]

void push_loop (const T index_after_last, F &&loop, const size_t num_blocks = 0)
inline

Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. Does not return a multi_future, so the user must use wait_for_tasks() or some other method to ensure that the loop finishes executing, otherwise bad things will happen. This overload is used for the special case where the first index is 0.

Template Parameters
F: The type of the function to loop through.
T: The type of the loop indices. Should be a signed or unsigned integer.
Parameters
index_after_last: The index after the last index in the loop. The loop will iterate from 0 to (index_after_last - 1) inclusive. In other words, it will be equivalent to "for (T i = 0; i < index_after_last; ++i)". Note that if index_after_last == 0, no blocks will be submitted.
loop: The function to loop through. Will be called once per block. Should take exactly two arguments: the first index in the block and the index after the last index in the block. loop(start, end) should typically involve a loop of the form "for (T i = start; i < end; ++i)".
num_blocks: The maximum number of blocks to split the loop into. The default is to use the number of threads in the pool.
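
A sketch of this zero-based overload; since push_loop() returns nothing, wait_for_tasks() is what guarantees the results are ready before they are read:

    #include <BS_thread_pool.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        BS::thread_pool pool;
        std::vector<int> squares(1000);

        // Each block writes to a disjoint range of indices, so no extra synchronization is needed.
        pool.push_loop(squares.size(),
            [&squares](const std::size_t start, const std::size_t end)
            {
                for (std::size_t i = start; i < end; ++i)
                    squares[i] = static_cast<int>(i * i);
            });

        pool.wait_for_tasks();               // mandatory before reading `squares`
        std::cout << squares[999] << '\n';   // 998001
    }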

◆ push_loop() [2/2]

void push_loop (const T1 first_index, const T2 index_after_last, F &&loop, const size_t num_blocks = 0)
inline

Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. Does not return a multi_future, so the user must use wait_for_tasks() or some other method to ensure that the loop finishes executing, otherwise bad things will happen.

Template Parameters
F: The type of the function to loop through.
T1: The type of the first index in the loop. Should be a signed or unsigned integer.
T2: The type of the index after the last index in the loop. Should be a signed or unsigned integer. If T1 is not the same as T2, a common type will be automatically inferred.
T: The common type of T1 and T2.
Parameters
first_index: The first index in the loop.
index_after_last: The index after the last index in the loop. The loop will iterate from first_index to (index_after_last - 1) inclusive. In other words, it will be equivalent to "for (T i = first_index; i < index_after_last; ++i)". Note that if index_after_last == first_index, no blocks will be submitted.
loop: The function to loop through. Will be called once per block. Should take exactly two arguments: the first index in the block and the index after the last index in the block. loop(start, end) should typically involve a loop of the form "for (T i = start; i < end; ++i)".
num_blocks: The maximum number of blocks to split the loop into. The default is to use the number of threads in the pool.
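
A sketch of the two-index overload processing only a sub-range of a buffer; as above, wait_for_tasks() supplies the synchronization that the missing multi_future would otherwise provide:

    #include <BS_thread_pool.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        BS::thread_pool pool;
        std::vector<double> data(100, 1.0);

        // Double only the elements in the index range [10, 90).
        pool.push_loop(10, 90,
            [&data](const int start, const int end)
            {
                for (int i = start; i < end; ++i)
                    data[i] *= 2.0;
            });

        pool.wait_for_tasks();
        std::cout << data[5] << ' ' << data[50] << '\n';   // 1 2
    }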

◆ push_task()

void push_task (F &&task, A &&... args)
inline

Push a function with zero or more arguments, but no return value, into the task queue. Does not return a future, so the user must use wait_for_tasks() or some other method to ensure that the task finishes executing, otherwise bad things will happen.

Template Parameters
F: The type of the function.
A: The types of the arguments.
Parameters
task: The function to push.
args: The zero or more arguments to pass to the function. Note that if the task is a class member function, the first argument must be a pointer to the object, i.e. &object (or this), followed by the actual arguments.
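
A sketch covering both cases mentioned above: a free function with arguments and a member function called through &object (the Counter type is purely illustrative):

    #include <BS_thread_pool.hpp>
    #include <atomic>
    #include <iostream>

    struct Counter   // hypothetical type, only for illustration
    {
        std::atomic<int> value{0};
        void add(const int amount) { value += amount; }
    };

    void print_sum(const int a, const int b) { std::cout << a + b << '\n'; }

    int main()
    {
        BS::thread_pool pool;
        Counter counter;

        pool.push_task(print_sum, 2, 3);              // free function with two arguments
        pool.push_task(&Counter::add, &counter, 5);   // member function: pass &object first
        pool.push_task([] { std::cout << "lambda task\n"; });

        pool.wait_for_tasks();                        // no futures, so wait explicitly
        std::cout << counter.value.load() << '\n';    // 5
    }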

◆ reset()

void reset ( const concurrency_t  thread_count_ = 0)
inline

Reset the number of threads in the pool. Waits for all currently running tasks to be completed, then destroys all threads in the pool and creates a new thread pool with the new number of threads. Any tasks that were waiting in the queue before the pool was reset will then be executed by the new threads. If the pool was paused before resetting it, the new pool will be paused as well.

Parameters
thread_count_: The number of threads to use. The default value is the total number of hardware threads available, as reported by the implementation. This is usually determined by the number of cores in the CPU. If a core is hyperthreaded, it will count as two threads.
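
A sketch of resizing the pool at run time; per the description above, any queued work survives the reset:

    #include <BS_thread_pool.hpp>
    #include <iostream>

    int main()
    {
        BS::thread_pool pool;                           // start with all hardware threads
        std::cout << pool.get_thread_count() << '\n';

        pool.reset(2);                                  // wait for running tasks, rebuild the pool with 2 threads
        std::cout << pool.get_thread_count() << '\n';   // 2

        pool.reset();                                   // back to the default (all hardware threads)
        std::cout << pool.get_thread_count() << '\n';
    }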

◆ submit()

std::future<R> submit (F &&task, A &&... args)
inline

Submit a function with zero or more arguments into the task queue. If the function has a return value, get a future for the eventual returned value. If the function has no return value, get an std::future<void> which can be used to wait until the task finishes.

Template Parameters
F: The type of the function.
A: The types of the zero or more arguments to pass to the function.
R: The return type of the function (can be void).
Parameters
task: The function to submit.
args: The zero or more arguments to pass to the function. Note that if the task is a class member function, the first argument must be a pointer to the object, i.e. &object (or this), followed by the actual arguments.
Returns
A future to be used later to wait for the function to finish executing and/or obtain its returned value if it has one.
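
A sketch of both cases: a task whose return value is retrieved through the future, and a void task whose std::future<void> is used only for waiting:

    #include <BS_thread_pool.hpp>
    #include <future>
    #include <iostream>

    int main()
    {
        BS::thread_pool pool;

        std::future<int> answer = pool.submit([](const int x, const int y) { return x * y; }, 6, 7);
        std::future<void> done  = pool.submit([] { std::cout << "side effect\n"; });

        done.wait();                          // wait for the void task to finish
        std::cout << answer.get() << '\n';    // 42, blocking until the task has run
    }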

◆ wait_for_tasks_duration()

bool wait_for_tasks_duration ( const std::chrono::duration< R, P > &  duration)
inline

Wait for tasks to be completed, but stop waiting after the specified duration has passed.

Template Parameters
R: An arithmetic type representing the number of ticks to wait.
P: An std::ratio representing the length of each tick in seconds.
Parameters
duration: The time duration to wait.
Returns
true if finished waiting before the duration expired, false if timed out or the pool is already waiting. In other words, returns false if and only if tasks are still running.
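
A sketch that polls with a timeout; the durations are arbitrary and only chosen to make the timeout observable:

    #include <BS_thread_pool.hpp>
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main()
    {
        using namespace std::chrono_literals;
        BS::thread_pool pool;

        pool.push_task([] { std::this_thread::sleep_for(500ms); });

        // Returns false while the 500 ms task is still running, true once everything has finished.
        while (!pool.wait_for_tasks_duration(100ms))
            std::cout << "still working...\n";
        std::cout << "all tasks done\n";
    }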

◆ wait_for_tasks_until()

bool wait_for_tasks_until ( const std::chrono::time_point< C, D > &  timeout_time)
inline

Wait for tasks to be completed, but stop waiting after the specified time point has been reached.

Template Parameters
C: The type of the clock used to measure time.
D: An std::chrono::duration type used to indicate the time point.
Parameters
timeout_time: The time point at which to stop waiting.
Returns
true if finished waiting before the time point was reached, false if timed out or the pool is already waiting. In other words, returns false if and only if tasks are still running.
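
A sketch that enforces an overall deadline; std::chrono::steady_clock is an arbitrary choice here, as any clock usable with std::chrono::time_point should be accepted:

    #include <BS_thread_pool.hpp>
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main()
    {
        using namespace std::chrono_literals;
        BS::thread_pool pool;

        pool.push_task([] { std::this_thread::sleep_for(200ms); });

        const auto deadline = std::chrono::steady_clock::now() + 1s;
        if (pool.wait_for_tasks_until(deadline))
            std::cout << "finished before the deadline\n";
        else
            std::cout << "deadline reached with tasks still running\n";
    }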

The documentation for this class was generated from the following file: BS_thread_pool.hpp