Struct ThreadPool

struct ThreadPool { ... }

Represents a user-created thread-pool.

Use a ThreadPoolBuilder to specify the number and/or names of threads in the pool. After calling ThreadPoolBuilder::build(), you can then execute functions explicitly within this ThreadPool using ThreadPool::install(). By contrast, top-level rayon functions (like join()) will execute implicitly within the current thread-pool.

Creating a ThreadPool

# use rayon_core as rayon;
let pool = rayon::ThreadPoolBuilder::new().num_threads(8).build().unwrap();

install() executes a closure in one of the ThreadPool's threads. In addition, any other rayon operations called inside of install() will also execute in the context of the ThreadPool.

When the ThreadPool is dropped, that's a signal for the threads it manages to terminate: they will complete executing any remaining work that you have spawned, and then terminate automatically.

Implementations

impl ThreadPool

fn new(configuration: Configuration) -> Result<ThreadPool, Box<dyn Error>>

Deprecated in favor of ThreadPoolBuilder::build.

fn install<OP, R>(self: &Self, op: OP) -> R
where
    OP: FnOnce() -> R + Send,
    R: Send

Executes op within the threadpool. Any attempts to use join, scope, or parallel iterators will then operate within that threadpool.

Warning: thread-local data

Because op is executing within the Rayon thread-pool, thread-local data from the current thread will not be accessible.

Warning: execution order

If the current thread is part of a different thread pool, it will try to keep busy while op completes in its target pool, similar to calling ThreadPool::yield_now() in a loop. Therefore, it may schedule other tasks to run on the current thread in the meantime. For example:

# use rayon_core as rayon;
fn main() {
    rayon::ThreadPoolBuilder::new().num_threads(1).build_global().unwrap();
    let pool = rayon::ThreadPoolBuilder::default().build().unwrap();
    let do_it = || {
        print!("one ");
        pool.install(||{});
        print!("two ");
    };
    rayon::join(|| do_it(), || do_it());
}

Since we configured just one thread in the global pool, one might expect do_it() to run sequentially, producing:

one two one two

However, each call to install() yields implicitly, allowing rayon to run multiple instances of do_it() concurrently on the single global thread. The following output would be equally valid:

one one two two

Panics

If op should panic, that panic will be propagated.

Using install()

# use rayon_core as rayon;
fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(8).build().unwrap();
    let n = pool.install(|| fib(20));
    println!("{}", n);
}

fn fib(n: usize) -> usize {
    if n == 0 || n == 1 {
        return n;
    }
    let (a, b) = rayon::join(|| fib(n - 1), || fib(n - 2)); // runs inside of `pool`
    a + b
}

fn broadcast<OP, R>(self: &Self, op: OP) -> Vec<R>
where
    OP: Fn(BroadcastContext<'_>) -> R + Sync,
    R: Send

Executes op within every thread in the threadpool. Any attempts to use join, scope, or parallel iterators will then operate within that threadpool.

Broadcasts are executed on each thread after they have exhausted their local work queue, before they attempt work-stealing from other threads. The goal of that strategy is to run everywhere in a timely manner without being too disruptive to current work. There may be alternative broadcast styles added in the future for more or less aggressive injection, if the need arises.

Warning: thread-local data

Because op is executing within the Rayon thread-pool, thread-local data from the current thread will not be accessible.

Panics

If op should panic on one or more threads, exactly one panic will be propagated, but only after all threads have completed (or panicked) their own copy of op.

Examples

# use rayon_core as rayon;
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(5).build().unwrap();

    // The argument gives context, including the index of each thread.
    let v: Vec<usize> = pool.broadcast(|ctx| ctx.index() * ctx.index());
    assert_eq!(v, &[0, 1, 4, 9, 16]);

    // The closure can reference the local stack.
    let count = AtomicUsize::new(0);
    pool.broadcast(|_| count.fetch_add(1, Ordering::Relaxed));
    assert_eq!(count.into_inner(), 5);
}

fn current_num_threads(self: &Self) -> usize

Returns the (current) number of threads in the thread pool.

Future compatibility note

Note that unless this thread-pool was created with a ThreadPoolBuilder that specifies the number of threads, this number may vary over time in future versions (see the num_threads() method for details).

fn current_thread_index(self: &Self) -> Option<usize>

If called from a Rayon worker thread in this thread-pool, returns the index of that thread; if not called from a Rayon thread, or called from a Rayon thread that belongs to a different thread-pool, returns None.

The index for a given thread will not change over the thread's lifetime. However, multiple threads may share the same index if they are in distinct thread-pools.

Future compatibility note

Currently, every thread-pool (including the global thread-pool) has a fixed number of threads, but this may change in future Rayon versions (see the num_threads() method for details). In that case, the index for a thread would not change during its lifetime, but thread indices may wind up being reused if threads are terminated and restarted.

fn current_thread_has_pending_tasks(self: &Self) -> Option<bool>

Returns true if the current worker thread currently has "local tasks" pending. This can be useful as part of a heuristic for deciding whether to spawn a new task or execute code on the current thread, particularly in breadth-first schedulers. However, keep in mind that this is an inherently racy check, as other worker threads may be actively "stealing" tasks from our local deque.

Background: Rayon uses a work-stealing scheduler. The key idea is that each thread has its own deque of tasks. Whenever a new task is spawned -- whether through join(), Scope::spawn(), or some other means -- that new task is pushed onto the thread's local deque. Worker threads have a preference for executing their own tasks; if however they run out of tasks, they will go try to "steal" tasks from other threads. This function therefore has an inherent race with other active worker threads, which may be removing items from the local deque.

fn join<A, B, RA, RB>(self: &Self, oper_a: A, oper_b: B) -> (RA, RB)
where
    A: FnOnce() -> RA + Send,
    B: FnOnce() -> RB + Send,
    RA: Send,
    RB: Send

Executes oper_a and oper_b in the thread-pool and returns the results. Equivalent to self.install(|| join(oper_a, oper_b)).

fn scope<'scope, OP, R>(self: &Self, op: OP) -> R
where
    OP: FnOnce(&Scope<'scope>) -> R + Send,
    R: Send

Creates a scope that executes within this thread-pool. Equivalent to self.install(|| scope(...)).

See also: the scope() function.

fn scope_fifo<'scope, OP, R>(self: &Self, op: OP) -> R
where
    OP: FnOnce(&ScopeFifo<'scope>) -> R + Send,
    R: Send

Creates a scope that executes within this thread-pool. Spawns from the same thread are prioritized in relative FIFO order. Equivalent to self.install(|| scope_fifo(...)).

See also: the scope_fifo() function.

fn in_place_scope<'scope, OP, R>(self: &Self, op: OP) -> R
where
    OP: FnOnce(&Scope<'scope>) -> R

Creates a scope that spawns work into this thread-pool.

See also: the in_place_scope() function.

fn in_place_scope_fifo<'scope, OP, R>(self: &Self, op: OP) -> R
where
    OP: FnOnce(&ScopeFifo<'scope>) -> R

Creates a scope that spawns work into this thread-pool in FIFO order.

See also: the in_place_scope_fifo() function.

fn spawn<OP>(self: &Self, op: OP)
where
    OP: FnOnce() + Send + 'static

Spawns an asynchronous task in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame -- therefore, it cannot capture any references onto the stack (you will likely need a move closure).

See also: the spawn() function defined on scopes.

fn spawn_fifo<OP>(self: &Self, op: OP)
where
    OP: FnOnce() + Send + 'static

Spawns an asynchronous task in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame -- therefore, it cannot capture any references onto the stack (you will likely need a move closure).

See also: the spawn_fifo() function defined on scopes.

fn spawn_broadcast<OP>(self: &Self, op: OP)
where
    OP: Fn(BroadcastContext<'_>) + Send + Sync + 'static

Spawns an asynchronous task on every thread in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame -- therefore, it cannot capture any references onto the stack (you will likely need a move closure).

fn yield_now(self: &Self) -> Option<Yield>

Cooperatively yields execution to Rayon.

This is similar to the general yield_now(), but only if the current thread is part of this thread pool.

Returns Some(Yield::Executed) if anything was executed, Some(Yield::Idle) if nothing was available, or None if the current thread is not part of this pool.

fn yield_local(self: &Self) -> Option<Yield>

Cooperatively yields execution to local Rayon work.

This is similar to the general yield_local(), but only if the current thread is part of this thread pool.

Returns Some(Yield::Executed) if anything was executed, Some(Yield::Idle) if nothing was available, or None if the current thread is not part of this pool.

impl Debug for ThreadPool

fn fmt(self: &Self, fmt: &mut Formatter<'_>) -> Result

impl Drop for ThreadPool

fn drop(self: &mut Self)

impl Freeze for ThreadPool

impl RefUnwindSafe for ThreadPool

impl Send for ThreadPool

impl Sync for ThreadPool

impl Unpin for ThreadPool

impl UnsafeUnpin for ThreadPool

impl UnwindSafe for ThreadPool

impl<T> Any for ThreadPool

fn type_id(self: &Self) -> TypeId

impl<T> Borrow for ThreadPool

fn borrow(self: &Self) -> &T

impl<T> BorrowMut for ThreadPool

fn borrow_mut(self: &mut Self) -> &mut T

impl<T> From for ThreadPool

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Pointable for ThreadPool

unsafe fn init(init: <T as Pointable>::Init) -> usize
unsafe fn deref<'a>(ptr: usize) -> &'a T
unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T
unsafe fn drop(ptr: usize)

impl<T, U> Into for ThreadPool

fn into(self: Self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of [From]<T> for U chooses to do.

impl<T, U> TryFrom for ThreadPool

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

impl<T, U> TryInto for ThreadPool

fn try_into(self: Self) -> Result<U, <U as TryFrom<T>>::Error>