Trait ParallelIterator
trait ParallelIterator: Sized + Send
Parallel version of the standard iterator trait.
The combinators on this trait are available on all parallel
iterators. Additional methods can be found on the
IndexedParallelIterator trait: those methods are only
available for parallel iterators where the number of items is
known in advance (so, e.g., after invoking filter, those methods
become unavailable).
For examples of using parallel iterators, see the docs on the
iter module.
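As a quick illustration (a minimal sketch, not taken from the iter module docs), a typical pipeline assuming the rayon prelude:
use rayon::prelude::*;

// Sum the squares of the even numbers in a slice, in parallel.
let input = [1, 2, 3, 4, 5, 6, 7, 8];
let sum_of_even_squares: i32 = input
    .par_iter()               // parallel iterator over &i32
    .filter(|&&x| x % 2 == 0) // keep the even numbers
    .map(|&x| x * x)          // square each one
    .sum();                   // reduce in parallel
assert_eq!(sum_of_even_squares, 4 + 16 + 36 + 64);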
Associated Types
type Item: Send
The type of item that this parallel iterator produces. For example, if you use the for_each method, this is the type of item that your closure will be invoked with.
Required Methods
fn drive_unindexed<C>(self, consumer: C) -> C::Result where C: UnindexedConsumer<Self::Item>
Internal method used to define the behavior of this parallel iterator. You should not need to call this directly.
This method causes the iterator self to start producing items and to feed them to the consumer consumer one by one. It may split the consumer before doing so to create the opportunity to produce in parallel.
See the README for more details on the internals of parallel iterators.
Provided Methods
fn for_each<OP>(self, op: OP) where OP: Fn(Self::Item) + Sync + Send
Executes OP on each item produced by the iterator, in parallel.
Examples
use rayon::prelude::*; (0..100).into_par_iter().for_each(|x| println!("{:?}", x));
fn for_each_with<OP, T>(self, init: T, op: OP) where OP: Fn(&mut T, Self::Item) + Sync + Send, T: Send + Clone
Executes OP on the given init value with each item produced by the iterator, in parallel.
The init value will be cloned only as needed to be paired with the group of items in each rayon job. It does not require the type to be Sync.
Examples
use std::sync::mpsc::channel; use rayon::prelude::*; let (sender, receiver) = channel(); (0..5).into_par_iter().for_each_with(sender, |s, x| s.send(x).unwrap()); let mut res: Vec<_> = receiver.iter().collect(); res.sort(); assert_eq!(&res[..], &[0, 1, 2, 3, 4]);
fn for_each_init<OP, INIT, T>(self, init: INIT, op: OP) where OP: Fn(&mut T, Self::Item) + Sync + Send, INIT: Fn() -> T + Sync + Send
Executes OP on a value returned by init with each item produced by the iterator, in parallel.
The init function will be called only as needed for a value to be paired with the group of items in each rayon job. There is no constraint on that returned type at all!
Examples
use rand::Rng;
use rayon::prelude::*;

let mut v = vec![0u8; 1_000_000];
v.par_chunks_mut(1000)
    .for_each_init(|| rand::thread_rng(), |rng, chunk| rng.fill(chunk));

// There's a remote chance that this will fail...
for i in 0u8..=255 {
    assert!(v.contains(&i));
}
fn try_for_each<OP, R>(self, op: OP) -> R where OP: Fn(Self::Item) -> R + Sync + Send, R: Try<Output = ()> + Send
Executes a fallible OP on each item produced by the iterator, in parallel.
If the OP returns Result::Err or Option::None, we will attempt to stop processing the rest of the items in the iterator as soon as possible, and we will return that terminating value. Otherwise, we will return an empty Result::Ok(()) or Option::Some(()). If there are multiple errors in parallel, it is not specified which will be returned.
Examples
use *; use ; // This will stop iteration early if there's any write error, like // having piped output get closed on the other end. .into_par_iter .try_for_each .expect;fn try_for_each_with<OP, T, R>(self: Self, init: T, op: OP) -> R where OP: Fn(&mut T, <Self as >::Item) -> R + Sync + Send, T: Send + Clone, R: Try<Output = ()> + SendExecutes a fallible
OPon the giveninitvalue with each item produced by the iterator, in parallel.This combines the
initsemantics offor_each_with()and the failure semantics oftry_for_each().Examples
use channel; use *; let = channel; .into_par_iter .try_for_each_with .expect; let mut res: = receiver.iter.collect; res.sort; assert_eq!fn try_for_each_init<OP, INIT, T, R>(self: Self, init: INIT, op: OP) -> R where OP: Fn(&mut T, <Self as >::Item) -> R + Sync + Send, INIT: Fn() -> T + Sync + Send, R: Try<Output = ()> + SendExecutes a fallible
OPon a value returned byinitwith each item produced by the iterator, in parallel.This combines the
initsemantics offor_each_init()and the failure semantics oftry_for_each().Examples
use rand::Rng;
use rayon::prelude::*;

let mut v = vec![0u8; 1_000_000];
v.par_chunks_mut(1000)
    .try_for_each_init(|| rand::thread_rng(), |rng, chunk| rng.try_fill(chunk))
    .expect("expected no rand errors");

// There's a remote chance that this will fail...
for i in 0u8..=255 {
    assert!(v.contains(&i));
}
fn count(self) -> usize
Counts the number of items in this parallel iterator.
Examples
use rayon::prelude::*; let count = (0..100).into_par_iter().count(); assert_eq!(count, 100);
fn map<F, R>(self, map_op: F) -> Map<Self, F> where F: Fn(Self::Item) -> R + Sync + Send, R: Send
Applies map_op to each item of this iterator, producing a new iterator with the results.
Examples
use rayon::prelude::*; let mut par_iter = (0..5).into_par_iter().map(|x| x * 2); let doubles: Vec<_> = par_iter.collect(); assert_eq!(&doubles[..], &[0, 2, 4, 6, 8]);
fn map_with<F, T, R>(self, init: T, map_op: F) -> MapWith<Self, T, F> where F: Fn(&mut T, Self::Item) -> R + Sync + Send, T: Send + Clone, R: Send
Applies map_op to the given init value with each item of this iterator, producing a new iterator with the results.
The init value will be cloned only as needed to be paired with the group of items in each rayon job. It does not require the type to be Sync.
Examples
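A minimal sketch of one possible use, assuming the rayon prelude (the scratch-buffer idea here is illustrative, not from the original docs):
use rayon::prelude::*;

// Reuse a scratch `String` per rayon job instead of allocating one per item.
let labels: Vec<String> = (0..5)
    .into_par_iter()
    .map_with(String::new(), |buf, x| {
        buf.clear();
        buf.push_str("item-");
        buf.push_str(&x.to_string());
        buf.clone() // yield an owned copy
    })
    .collect();
assert_eq!(labels[0], "item-0");
assert_eq!(labels.len(), 5);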
use channel; use *; let = channel; let a: = .into_par_iter // iterating over i32 .map_with .collect; // collecting the returned values into a vector let mut b: = receiver.iter // iterating over the values in the channel .collect; // and collecting them b.sort; assert_eq!;fn map_init<F, INIT, T, R>(self: Self, init: INIT, map_op: F) -> MapInit<Self, INIT, F> where F: Fn(&mut T, <Self as >::Item) -> R + Sync + Send, INIT: Fn() -> T + Sync + Send, R: SendApplies
map_opto a value returned byinitwith each item of this iterator, producing a new iterator with the results.The
initfunction will be called only as needed for a value to be paired with the group of items in each rayon job. There is no constraint on that returned type at all!Examples
use rand::Rng;
use rayon::prelude::*;

let a: Vec<_> = (1i32..1_000_000)
    .into_par_iter()
    .map_init(|| rand::thread_rng(), |rng, x| if rng.gen() { -x } else { x })
    .collect();

// There's a remote chance that this will fail...
assert!(a.iter().any(|&x| x < 0));
assert!(a.iter().any(|&x| x > 0));
fn cloned<'a, T>(self) -> Cloned<Self> where T: 'a + Clone + Send, Self: ParallelIterator<Item = &'a T>
Creates an iterator which clones all of its elements. This may be useful when you have an iterator over &T, but you need T, and that type implements Clone. See also copied().
Examples
use rayon::prelude::*; let a = [1, 2, 3]; let v_cloned: Vec<_> = a.par_iter().cloned().collect(); // cloned is the same as .map(|&x| x), for integers let v_map: Vec<_> = a.par_iter().map(|&x| x).collect(); assert_eq!(v_cloned, vec![1, 2, 3]); assert_eq!(v_map, vec![1, 2, 3]);
fn copied<'a, T>(self) -> Copied<Self> where T: 'a + Copy + Send, Self: ParallelIterator<Item = &'a T>
Creates an iterator which copies all of its elements. This may be useful when you have an iterator over &T, but you need T, and that type implements Copy. See also cloned().
Examples
use rayon::prelude::*; let a = [1, 2, 3]; let v_copied: Vec<_> = a.par_iter().copied().collect(); // copied is the same as .map(|&x| x), for integers let v_map: Vec<_> = a.par_iter().map(|&x| x).collect(); assert_eq!(v_copied, vec![1, 2, 3]); assert_eq!(v_map, vec![1, 2, 3]);
fn inspect<OP>(self, inspect_op: OP) -> Inspect<Self, OP> where OP: Fn(&Self::Item) + Sync + Send
Applies inspect_op to a reference to each item of this iterator, producing a new iterator passing through the original items. This is often useful for debugging to see what's happening in iterator stages.
Examples
use rayon::prelude::*;

let a = [1, 4, 2, 3];

// this iterator sequence is complex.
let sum = a.par_iter()
            .cloned()
            .filter(|&x| x % 2 == 0)
            .reduce(|| 0, |sum, i| sum + i);
println!("{}", sum);

// let's add some inspect() calls to investigate what's happening
let sum = a.par_iter()
            .cloned()
            .inspect(|x| println!("about to filter: {}", x))
            .filter(|&x| x % 2 == 0)
            .inspect(|x| println!("made it through filter: {}", x))
            .reduce(|| 0, |sum, i| sum + i);
println!("{}", sum);
fn update<F>(self, update_op: F) -> Update<Self, F> where F: Fn(&mut Self::Item) + Sync + Send
Mutates each item of this iterator before yielding it.
Examples
use rayon::prelude::*; let par_iter = (0..5).into_par_iter().update(|x| { *x *= 2; }); let doubles: Vec<_> = par_iter.collect(); assert_eq!(&doubles[..], &[0, 2, 4, 6, 8]);
fn filter<P>(self, filter_op: P) -> Filter<Self, P> where P: Fn(&Self::Item) -> bool + Sync + Send
Applies filter_op to each item of this iterator, producing a new iterator with only the items that gave true results.
Examples
use rayon::prelude::*; let mut par_iter = (0..10).into_par_iter().filter(|x| x % 2 == 0); let even_numbers: Vec<_> = par_iter.collect(); assert_eq!(&even_numbers[..], &[0, 2, 4, 6, 8]);
fn filter_map<P, R>(self, filter_op: P) -> FilterMap<Self, P> where P: Fn(Self::Item) -> Option<R> + Sync + Send, R: Send
Applies filter_op to each item of this iterator to get an Option, producing a new iterator with only the items from Some results.
Examples
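A minimal sketch, assuming the rayon prelude and illustrative input values:
use rayon::prelude::*;

// Parse the strings that are valid integers and drop the rest.
let inputs = ["1", "two", "3", "four", "5"];
let numbers: Vec<i32> = inputs
    .par_iter()
    .filter_map(|s| s.parse::<i32>().ok()) // Some(n) is kept, None is dropped
    .collect();
assert_eq!(numbers, [1, 3, 5]);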
fn flat_map<F, PI>(self, map_op: F) -> FlatMap<Self, F> where F: Fn(Self::Item) -> PI + Sync + Send, PI: IntoParallelIterator
Applies map_op to each item of this iterator to get nested parallel iterators, producing a new parallel iterator that flattens these back into one.
See also flat_map_iter.
Examples
use rayon::prelude::*; let a = [[1, 2], [3, 4], [5, 6], [7, 8]]; let par_iter = a.par_iter().cloned().flat_map(|a| a.to_vec()); let vec: Vec<_> = par_iter.collect(); assert_eq!(&vec[..], &[1, 2, 3, 4, 5, 6, 7, 8]);
fn flat_map_iter<F, SI>(self, map_op: F) -> FlatMapIter<Self, F> where F: Fn(Self::Item) -> SI + Sync + Send, SI: IntoIterator, SI::Item: Send
Applies map_op to each item of this iterator to get nested serial iterators, producing a new parallel iterator that flattens these back into one.
flat_map_iter versus flat_map
These two methods are similar but behave slightly differently. With flat_map, each of the nested iterators must be a parallel iterator, and they will be further split up with nested parallelism. With flat_map_iter, each nested iterator is a sequential Iterator, and we only parallelize between them, while the items produced by each nested iterator are processed sequentially.
When choosing between these methods, consider whether nested parallelism suits the potential iterators at hand. If there's little computation involved, or its length is much less than the outer parallel iterator, then it may perform better to avoid the overhead of parallelism, just flattening sequentially with flat_map_iter. If there is a lot of computation, potentially outweighing the outer parallel iterator, then the nested parallelism of flat_map may be worthwhile.
Examples
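A minimal sketch contrasting the two methods, assuming the rayon prelude (the input values are illustrative):
use rayon::prelude::*;

let words = ["alpha", "beta", "gamma"];

// flat_map: the nested iterators are themselves parallel iterators.
let chars_a: Vec<char> = words
    .par_iter()
    .flat_map(|s| s.par_chars())
    .collect();

// flat_map_iter: each nested iterator runs sequentially within a job.
let chars_b: Vec<char> = words
    .par_iter()
    .flat_map_iter(|s| s.chars())
    .collect();

assert_eq!(chars_a, chars_b);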
use *; use RefCell; let a = ; let par_iter = a.par_iter.flat_map_iter; let vec: = par_iter.collect; assert_eq!;fn flatten(self: Self) -> Flatten<Self> where <Self as >::Item: IntoParallelIteratorAn adaptor that flattens parallel-iterable
Items into one large iterator.See also
flatten_iter.Examples
use *; let x: = vec!; let y: = x.into_par_iter.flatten.collect; assert_eq!;fn flatten_iter(self: Self) -> FlattenIter<Self> where <Self as >::Item: IntoIterator, <<Self as >::Item as IntoIterator>::Item: SendAn adaptor that flattens serial-iterable
Items into one large iterator.See also
flattenand the analogous comparison offlat_map_iterversusflat_map.Examples
use rayon::prelude::*; let x: Vec<Vec<_>> = vec![vec![1, 2], vec![3, 4]]; let iters: Vec<_> = x.into_iter().map(Vec::into_iter).collect(); let y: Vec<_> = iters.into_par_iter().flatten_iter().collect(); assert_eq!(y, vec![1, 2, 3, 4]);
fn reduce<OP, ID>(self, identity: ID, op: OP) -> Self::Item where OP: Fn(Self::Item, Self::Item) -> Self::Item + Sync + Send, ID: Fn() -> Self::Item + Sync + Send
Reduces the items in the iterator into one item using op. The argument identity should be a closure that can produce an "identity" value which may be inserted into the sequence as needed to create opportunities for parallel execution. So, for example, if you are doing a summation, then identity() ought to produce something that represents the zero for your type (but consider just calling sum() in that case).
Examples
// Iterate over a sequence of pairs `(x0, y0), ..., (xN, yN)`
// and use reduce to compute one pair `(x0 + ... + xN, y0 + ... + yN)`
// where the first/second elements are summed separately.
use rayon::prelude::*;
let sums = [(0, 1), (5, 6), (16, 2), (8, 9)]
           .par_iter()        // iterating over &(i32, i32)
           .cloned()          // iterating over (i32, i32)
           .reduce(|| (0, 0), |a, b| (a.0 + b.0, a.1 + b.1));
assert_eq!(sums, (0 + 5 + 16 + 8, 1 + 6 + 2 + 9));
Note: unlike a sequential fold operation, the order in which op will be applied to reduce the result is not fully specified. So op should be associative or else the results will be non-deterministic. And of course identity() should produce a true identity.
fn reduce_with<OP>(self, op: OP) -> Option<Self::Item> where OP: Fn(Self::Item, Self::Item) -> Self::Item + Sync + Send
Reduces the items in the iterator into one item using op. If the iterator is empty, None is returned; otherwise, Some is returned.
This version of reduce is simple but somewhat less efficient. If possible, it is better to call reduce(), which requires an identity element.
Examples
use rayon::prelude::*;
let sums = [(0, 1), (5, 6), (16, 2), (8, 9)]
           .par_iter()        // iterating over &(i32, i32)
           .cloned()          // iterating over (i32, i32)
           .reduce_with(|a, b| (a.0 + b.0, a.1 + b.1))
           .unwrap();
assert_eq!(sums, (0 + 5 + 16 + 8, 1 + 6 + 2 + 9));
Note: unlike a sequential fold operation, the order in which op will be applied to reduce the result is not fully specified. So op should be associative or else the results will be non-deterministic.
fn try_reduce<T, OP, ID>(self, identity: ID, op: OP) -> Self::Item where OP: Fn(T, T) -> Self::Item + Sync + Send, ID: Fn() -> T + Sync + Send, Self::Item: Try<Output = T>
Reduces the items in the iterator into one item using a fallible op. The identity argument is used the same way as in reduce().
If a Result::Err or Option::None item is found, or if op reduces to one, we will attempt to stop processing the rest of the items in the iterator as soon as possible, and we will return that terminating value. Otherwise, we will return the final reduced Result::Ok(T) or Option::Some(T). If there are multiple errors in parallel, it is not specified which will be returned.
Examples
use rayon::prelude::*;

// Compute the sum of squares, being careful about overflow.
fn sum_squares<I: IntoParallelIterator<Item = i32>>(iter: I) -> Option<i32> {
    iter.into_par_iter()
        .map(|i| i.checked_mul(i))          // square each item,
        .try_reduce(|| 0, i32::checked_add) // and add them up!
}
assert_eq!(sum_squares(0..5), Some(0 + 1 + 4 + 9 + 16));

// The sum might overflow
assert_eq!(sum_squares(0..10_000), None);

// Or the squares might overflow before it even reaches `try_reduce`
assert_eq!(sum_squares(1_000_000..1_000_001), None);
fn try_reduce_with<T, OP>(self, op: OP) -> Option<Self::Item> where OP: Fn(T, T) -> Self::Item + Sync + Send, Self::Item: Try<Output = T>
Reduces the items in the iterator into one item using a fallible op.
Like reduce_with(), if the iterator is empty, None is returned; otherwise, Some is returned. Beyond that, it behaves like try_reduce() for handling Err/None.
For instance, with Option items, the return value may be:
None, the iterator was empty
Some(None), we stopped after encountering None.
Some(Some(x)), the entire iterator reduced to x.
With Result items, the nesting is more obvious:
None, the iterator was empty
Some(Err(e)), we stopped after encountering an error e.
Some(Ok(x)), the entire iterator reduced to x.
Examples
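A minimal sketch, assuming the rayon prelude and illustrative values (checked addition over Option items):
use rayon::prelude::*;

// Sum with overflow checking; an empty input yields `None` overall.
let numbers = [10_u8, 20, 30];
let total = numbers
    .par_iter()
    .map(|&n| Some(n))                         // lift each item into `Option`
    .try_reduce_with(|a, b| a.checked_add(b)); // a `None` here short-circuits
assert_eq!(total, Some(Some(60)));

let empty: [u8; 0] = [];
assert_eq!(empty.par_iter().map(|&n| Some(n)).try_reduce_with(u8::checked_add), None);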
use *; let files = ; // Find the biggest file files.into_par_iter .map .try_reduce_with .expect .expect_err;fn fold<T, ID, F>(self: Self, identity: ID, fold_op: F) -> Fold<Self, ID, F> where F: Fn(T, <Self as >::Item) -> T + Sync + Send, ID: Fn() -> T + Sync + Send, T: SendParallel fold is similar to sequential fold except that the sequence of items may be subdivided before it is folded. Consider a list of numbers like
22 3 77 89 46. If you used sequential fold to add them (fold(0, |a,b| a+b), you would wind up first adding 0 + 22, then 22 + 3, then 25 + 77, and so forth. The parallel fold works similarly except that it first breaks up your list into sublists, and hence instead of yielding up a single sum at the end, it yields up multiple sums. The number of results is nondeterministic, as is the point where the breaks occur.So if we did the same parallel fold (
fold(0, |a,b| a+b)) on our example list, we might wind up with a sequence of two numbers, like so:
22 3 77 89 46
      |     |
     102   135
Or perhaps these three numbers:
22 3 77 89 46
      |  |  |
     102 89 46
In general, Rayon will attempt to find good breaking points that keep all of your cores busy.
Fold versus reduce
The fold() and reduce() methods each take an identity element and a combining function, but they operate rather differently.
reduce() requires that the identity function has the same type as the things you are iterating over, and it fully reduces the list of items into a single item. So, for example, imagine we are iterating over a list of bytes bytes: [128_u8, 64_u8, 64_u8]. If we used bytes.reduce(|| 0_u8, |a: u8, b: u8| a + b), we would get an overflow. This is because 0, a, and b here are all bytes, just like the numbers in the list (I wrote the types explicitly above, but those are the only types you can use). To avoid the overflow, we would need to do something like bytes.map(|b| b as u32).reduce(|| 0, |a, b| a + b), in which case our result would be 256.
In contrast, with fold(), the identity function does not have to have the same type as the things you are iterating over, and you potentially get back many results. So, if we continue with the bytes example from the previous paragraph, we could do bytes.fold(|| 0_u32, |a, b| a + (b as u32)) to convert our bytes into u32. And of course we might not get back a single sum.
There is a more subtle distinction as well, though it's actually implied by the above points. When you use reduce(), your reduction function is sometimes called with values that were never part of your original parallel iterator (for example, both the left and right might be a partial sum). With fold(), in contrast, the left value in the fold function is always the accumulator, and the right value is always from your original sequence.
Fold vs Map/Reduce
Fold makes sense if you have some operation where it is cheaper to create groups of elements at a time. For example, imagine collecting characters into a string. If you were going to use map/reduce, you might try this:
use rayon::prelude::*;
let s = ['a', 'b', 'c', 'd', 'e']
    .par_iter()
    .map(|c: &char| format!("{}", c))
    .reduce(|| String::new(), |mut a: String, b: String| { a.push_str(&b); a });
assert_eq!(s, "abcde");
Because reduce produces the same type of element as its input, you have to first map each character into a string, and then you can reduce them. This means we create one string per element in our iterator -- not so great. Using fold, we can do this instead:
use rayon::prelude::*;
let s = ['a', 'b', 'c', 'd', 'e']
    .par_iter()
    .fold(|| String::new(), |mut s: String, c: &char| { s.push(*c); s })
    .reduce(|| String::new(), |mut a: String, b: String| { a.push_str(&b); a });
assert_eq!(s, "abcde");
Now fold will process groups of our characters at a time, and we only make one string per group. We should wind up with some small-ish number of strings roughly proportional to the number of CPUs you have (it will ultimately depend on how busy your processors are). Note that we still need to do a reduce afterwards to combine those groups of strings into a single string.
You could use a similar trick to save partial results (e.g., a cache) or something similar.
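A minimal sketch of the bytes discussion above as runnable code, assuming the rayon prelude (the byte values are illustrative):
use rayon::prelude::*;

let bytes = [250_u8, 200, 100, 75, 50];

// reduce() alone would stay in `u8` and overflow on these values.
// fold() lets the accumulator be a wider type (`u32` here); the partial
// sums it produces are then combined with a cheap final reduction.
let total: u32 = bytes
    .par_iter()
    .fold(|| 0_u32, |acc, &b| acc + b as u32) // many partial u32 sums
    .sum();                                   // combine the partial sums
assert_eq!(total, 675);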
Combining fold with other operations
You can combine fold with reduce if you want to produce a single value. This is then roughly equivalent to a map/reduce combination in effect:
use rayon::prelude::*;
let bytes = 0..22_u8;
let sum = bytes.into_par_iter()
               .fold(|| 0_u32, |a: u32, b: u8| a + (b as u32))
               .sum::<u32>();
assert_eq!(sum, (0..22).sum()); // compare to sequential
fn fold_with<F, T>(self, init: T, fold_op: F) -> FoldWith<Self, T, F> where F: Fn(T, Self::Item) -> T + Sync + Send, T: Send + Clone
Applies fold_op to the given init value with each item of this iterator, finally producing the value for further use.
This works essentially like fold(|| init.clone(), fold_op), except it doesn't require the init type to be Sync, nor any other form of added synchronization.
Examples
use *; let bytes = 0..22_u8; let sum = bytes.into_par_iter .fold_with .; assert_eq!; // compare to sequentialfn try_fold<T, R, ID, F>(self: Self, identity: ID, fold_op: F) -> TryFold<Self, R, ID, F> where F: Fn(T, <Self as >::Item) -> R + Sync + Send, ID: Fn() -> T + Sync + Send, R: Try<Output = T> + SendPerforms a fallible parallel fold.
This is a variation of fold() for operations which can fail with Option::None or Result::Err. The first such failure stops processing the local set of items, without affecting other folds in the iterator's subdivisions.
Often, try_fold() will be followed by try_reduce() for a final reduction and global short-circuiting effect.
Examples
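A minimal sketch, assuming the rayon prelude (the checked-addition pipeline here is illustrative, not from the original docs):
use rayon::prelude::*;

// Each rayon job folds its local items with checked addition; a local
// overflow stops that job, and `try_reduce` short-circuits globally.
let ok: Option<u32> = (0..100_u32)
    .into_par_iter()
    .try_fold(|| 0_u32, |acc, x| acc.checked_add(x))
    .try_reduce(|| 0, u32::checked_add);
assert_eq!(ok, Some(4950));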
use *; let bytes = 0..22_u8; let sum = bytes.into_par_iter .try_fold .try_reduce; assert_eq!; // compare to sequentialfn try_fold_with<F, T, R>(self: Self, init: T, fold_op: F) -> TryFoldWith<Self, R, F> where F: Fn(T, <Self as >::Item) -> R + Sync + Send, R: Try<Output = T> + Send, T: Clone + SendPerforms a fallible parallel fold with a cloneable
initvalue.This combines the
initsemantics offold_with()and the failure semantics oftry_fold().use *; let bytes = 0..22_u8; let sum = bytes.into_par_iter .try_fold_with .try_reduce; assert_eq!; // compare to sequentialfn sum<S>(self: Self) -> S where S: Send + Sum<<Self as >::Item> + Sum<S>Sums up the items in the iterator.
Note that the order in which items will be reduced is not specified, so if the + operator is not truly associative (as is the case for floating-point numbers), then the results are not fully deterministic.
Basically equivalent to self.reduce(|| 0, |a, b| a + b), except that the type of 0 and the + operation may vary depending on the type of value being produced.
Examples
use *; let a = ; let sum: i32 = a.par_iter.sum; assert_eq!;fn product<P>(self: Self) -> P where P: Send + Product<<Self as >::Item> + Product<P>Multiplies all the items in the iterator.
Note that the order in which items will be reduced is not specified, so if the * operator is not truly associative (as is the case for floating-point numbers), then the results are not fully deterministic.
Basically equivalent to self.reduce(|| 1, |a, b| a * b), except that the type of 1 and the * operation may vary depending on the type of value being produced.
Examples
use rayon::prelude::*;
fn factorial(n: u32) -> u32 { (1..n + 1).into_par_iter().product() }
assert_eq!(factorial(0), 1);
assert_eq!(factorial(1), 1);
assert_eq!(factorial(5), 120);
fn min(self) -> Option<Self::Item> where Self::Item: Ord
Computes the minimum of all the items in the iterator. If the iterator is empty, None is returned; otherwise, Some(min) is returned.
Note that the order in which the items will be reduced is not specified, so if the Ord impl is not truly associative, then the results are not deterministic.
Basically equivalent to self.reduce_with(|a, b| Ord::min(a, b)).
Examples
use *; let a = ; assert_eq!; let b: = ; assert_eq!;fn min_by<F>(self: Self, f: F) -> Option<<Self as >::Item> where F: Sync + Send + Fn(&<Self as >::Item, &<Self as >::Item) -> OrderingComputes the minimum of all the items in the iterator with respect to the given comparison function. If the iterator is empty,
Noneis returned; otherwise,Some(min)is returned.Note that the order in which the items will be reduced is not specified, so if the comparison function is not associative, then the results are not deterministic.
Examples
use *; let a = ; assert_eq!;fn min_by_key<K, F>(self: Self, f: F) -> Option<<Self as >::Item> where K: Ord + Send, F: Sync + Send + Fn(&<Self as >::Item) -> KComputes the item that yields the minimum value for the given function. If the iterator is empty,
Noneis returned; otherwise,Some(item)is returned.Note that the order in which the items will be reduced is not specified, so if the
Ordimpl is not truly associative, then the results are not deterministic.Examples
use *; let a = ; assert_eq!;fn max(self: Self) -> Option<<Self as >::Item> where <Self as >::Item: OrdComputes the maximum of all the items in the iterator. If the iterator is empty,
Noneis returned; otherwise,Some(max)is returned.Note that the order in which the items will be reduced is not specified, so if the
Ordimpl is not truly associative, then the results are not deterministic.Basically equivalent to
self.reduce_with(|a, b| Ord::max(a, b)).Examples
use *; let a = ; assert_eq!; let b: = ; assert_eq!;fn max_by<F>(self: Self, f: F) -> Option<<Self as >::Item> where F: Sync + Send + Fn(&<Self as >::Item, &<Self as >::Item) -> OrderingComputes the maximum of all the items in the iterator with respect to the given comparison function. If the iterator is empty,
Noneis returned; otherwise,Some(max)is returned.Note that the order in which the items will be reduced is not specified, so if the comparison function is not associative, then the results are not deterministic.
Examples
use *; let a = ; assert_eq!;fn max_by_key<K, F>(self: Self, f: F) -> Option<<Self as >::Item> where K: Ord + Send, F: Sync + Send + Fn(&<Self as >::Item) -> KComputes the item that yields the maximum value for the given function. If the iterator is empty,
Noneis returned; otherwise,Some(item)is returned.Note that the order in which the items will be reduced is not specified, so if the
Ordimpl is not truly associative, then the results are not deterministic.Examples
use *; let a = ; assert_eq!;fn chain<C>(self: Self, chain: C) -> Chain<Self, <C as >::Iter> where C: IntoParallelIterator<Item = <Self as >::Item>Takes two iterators and creates a new iterator over both.
Examples
use rayon::prelude::*; let a = [0, 1, 2]; let b = [9, 8, 7]; let par_iter = a.par_iter().chain(b.par_iter()); let chained: Vec<_> = par_iter.cloned().collect(); assert_eq!(&chained[..], &[0, 1, 2, 9, 8, 7]);
fn find_any<P>(self, predicate: P) -> Option<Self::Item> where P: Fn(&Self::Item) -> bool + Sync + Send
Searches for some item in the parallel iterator that matches the given predicate and returns it. This operation is similar to find on sequential iterators, but the item returned may not be the first one in the parallel sequence which matches, since we search the entire sequence in parallel.
Once a match is found, we will attempt to stop processing the rest of the items in the iterator as soon as possible (just as find stops iterating once a match is found).
Examples
use rayon::prelude::*; let a = [1, 2, 3, 3]; assert_eq!(a.par_iter().find_any(|&&x| x == 3), Some(&3)); assert_eq!(a.par_iter().find_any(|&&x| x == 100), None);
fn find_first<P>(self, predicate: P) -> Option<Self::Item> where P: Fn(&Self::Item) -> bool + Sync + Send
Searches for the sequentially first item in the parallel iterator that matches the given predicate and returns it.
Once a match is found, all attempts to the right of the match will be stopped, while attempts to the left must continue in case an earlier match is found.
For added performance, you might consider using find_first in conjunction with by_exponential_blocks() on IndexedParallelIterator.
Note that not all parallel iterators have a useful order, much like sequential HashMap iteration, so "first" may be nebulous. If you just want the first match discovered anywhere in the iterator, find_any is a better choice.
Examples
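A minimal sketch contrasting find_first with find_any, assuming the rayon prelude and illustrative values:
use rayon::prelude::*;

let a = [1, 2, 3, 10, 20, 30, 100];

// `find_first` returns the match that comes earliest in the sequence...
assert_eq!(a.par_iter().find_first(|&&x| x > 5), Some(&10));

// ...while `find_any` may return any of the matches (10, 20, 30, or 100).
let any = a.par_iter().find_any(|&&x| x > 5);
assert!(any.is_some());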
use *; let a = ; assert_eq!; assert_eq!;fn find_last<P>(self: Self, predicate: P) -> Option<<Self as >::Item> where P: Fn(&<Self as >::Item) -> bool + Sync + SendSearches for the sequentially last item in the parallel iterator that matches the given predicate and returns it.
Once a match is found, all attempts to the left of the match will be stopped, while attempts to the right must continue in case a later match is found.
Note that not all parallel iterators have a useful order, much like sequential
HashMapiteration, so "last" may be nebulous. When the order doesn't actually matter to you,find_anyis a better choice.Examples
use *; let a = ; assert_eq!; assert_eq!;fn find_map_any<P, R>(self: Self, predicate: P) -> Option<R> where P: Fn(<Self as >::Item) -> Option<R> + Sync + Send, R: SendApplies the given predicate to the items in the parallel iterator and returns any non-None result of the map operation.
Once a non-None value is produced from the map operation, we will attempt to stop processing the rest of the items in the iterator as soon as possible.
Note that this method only returns some item in the parallel iterator that is not None from the map predicate. The item returned may not be the first non-None value produced in the parallel sequence, since the entire sequence is mapped over in parallel.
Examples
use *; let c = ; let found_number = c.par_iter.find_map_any; assert_eq!;fn find_map_first<P, R>(self: Self, predicate: P) -> Option<R> where P: Fn(<Self as >::Item) -> Option<R> + Sync + Send, R: SendApplies the given predicate to the items in the parallel iterator and returns the sequentially first non-None result of the map operation.
Once a non-None value is produced from the map operation, all attempts to the right of the match will be stopped, while attempts to the left must continue in case an earlier match is found.
Note that not all parallel iterators have a useful order, much like sequential
HashMapiteration, so "first" may be nebulous. If you just want the first non-None value discovered anywhere in the iterator,find_map_anyis a better choice.Examples
use *; let c = ; let first_number = c.par_iter.find_map_first; assert_eq!;fn find_map_last<P, R>(self: Self, predicate: P) -> Option<R> where P: Fn(<Self as >::Item) -> Option<R> + Sync + Send, R: SendApplies the given predicate to the items in the parallel iterator and returns the sequentially last non-None result of the map operation.
Once a non-None value is produced from the map operation, all attempts to the left of the match will be stopped, while attempts to the right must continue in case a later match is found.
Note that not all parallel iterators have a useful order, much like sequential HashMap iteration, so "last" may be nebulous. If you just want any non-None value discovered anywhere in the iterator, find_map_any is a better choice.
Examples
use rayon::prelude::*; let c = ["lol", "NaN", "2", "5"]; let last_number = c.par_iter().find_map_last(|s| s.parse().ok()); assert_eq!(last_number, Some(5));
fn any<P>(self, predicate: P) -> bool where P: Fn(Self::Item) -> bool + Sync + Send
Searches for some item in the parallel iterator that matches the given predicate, and if so returns true. Once a match is found, we'll attempt to stop processing the rest of the items. Proving that there's no match, returning false, does require visiting every item.
Examples
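A minimal sketch covering both any and all, assuming the rayon prelude and illustrative values:
use rayon::prelude::*;

let a = [0, 12, 3, 4, 0, 23, 0];

// `any` can stop early once a single match is found...
assert!(a.par_iter().any(|&x| x == 0));

// ...while proving `all` requires visiting every item (unless one fails early).
assert!(a.par_iter().all(|&x| x < 100));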
use *; let a = ; let is_valid = a.par_iter.any; assert!;fn all<P>(self: Self, predicate: P) -> bool where P: Fn(<Self as >::Item) -> bool + Sync + SendTests that every item in the parallel iterator matches the given predicate, and if so returns true. If a counter-example is found, we'll attempt to stop processing more items, then return false.
Examples
use *; let a = ; let is_valid = a.par_iter.all; assert!;fn while_some<T>(self: Self) -> WhileSome<Self> where Self: ParallelIterator<Item = Option<T>>, T: SendCreates an iterator over the
Someitems of this iterator, halting as soon as anyNoneis found.Examples
use std::sync::atomic::{AtomicUsize, Ordering};
use rayon::prelude::*;

let counter = AtomicUsize::new(0);
let value = (0_i32..2048)
    .into_par_iter()
    .map(|x| {
        counter.fetch_add(1, Ordering::SeqCst);
        if x < 1024 { Some(x) } else { None }
    })
    .while_some()
    .max();

assert!(value < Some(1024));
assert!(counter.load(Ordering::SeqCst) < 2048); // should not have visited every single one
fn panic_fuse(self) -> PanicFuse<Self>
Wraps an iterator with a fuse in case of panics, to halt all threads as soon as possible.
Panics within parallel iterators are always propagated to the caller, but they don't always halt the rest of the iterator right away, due to the internal semantics of join. This adaptor makes a greater effort to stop processing other items sooner, with the cost of additional synchronization overhead, which may also inhibit some optimizations.
Examples
If this code didn't use panic_fuse(), it would continue processing many more items in other threads (with long sleep delays) before the panic is finally propagated.
use rayon::prelude::*;
use std::{thread, time};

(0..1_000_000)
    .into_par_iter()
    .panic_fuse()
    .for_each(|i| {
        // simulate some work
        thread::sleep(time::Duration::from_secs(1));
        assert!(i > 0); // oops!
    });
fn collect<C>(self) -> C where C: FromParallelIterator<Self::Item>
Creates a fresh collection containing all the elements produced by this parallel iterator.
You may prefer collect_into_vec(), implemented on IndexedParallelIterator, if your underlying iterator also implements it. collect_into_vec() allocates efficiently with precise knowledge of how many elements the iterator contains, and even allows you to reuse an existing vector's backing store rather than allocating a fresh vector.
See also collect_vec_list() for collecting into a LinkedList<Vec<T>>.
Examples
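A minimal sketch, assuming the rayon prelude (the target collections here are illustrative):
use rayon::prelude::*;
use std::collections::HashSet;

// The target collection only has to implement `FromParallelIterator`.
let evens: Vec<i32> = (0..10).into_par_iter().filter(|n| n % 2 == 0).collect();
assert_eq!(evens, [0, 2, 4, 6, 8]);

let unique: HashSet<i32> = [1, 1, 2, 3, 3].into_par_iter().collect();
assert_eq!(unique.len(), 3);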
use *; let sync_vec: = .into_iter.collect; let async_vec: = .into_par_iter.collect; assert_eq!;You can collect a pair of collections like
unzipfor paired items:use *; let a = ; let : = a.into_par_iter.collect; assert_eq!; assert_eq!;Or like
partition_mapforEitheritems:use *; use Either; let : = .into_par_iter.map.collect; assert_eq!; assert_eq!;You can even collect an arbitrarily-nested combination of pairs and
Either:use *; use Either; let : = .into_par_iter.map.collect; assert_eq!; assert_eq!; assert_eq!;All of that can also be combined with short-circuiting collection of
ResultorOptiontypes:use *; use Either; let result: = .into_par_iter.map.collect; let error = result.unwrap_err; assert!;fn unzip<A, B, FromA, FromB>(self: Self) -> (FromA, FromB) where Self: ParallelIterator<Item = (A, B)>, FromA: Default + Send + ParallelExtend<A>, FromB: Default + Send + ParallelExtend<B>, A: Send, B: SendUnzips the items of a parallel iterator into a pair of arbitrary
ParallelExtend containers.
You may prefer to use unzip_into_vecs(), which allocates more efficiently with precise knowledge of how many elements the iterator contains, and even allows you to reuse existing vectors' backing stores rather than allocating fresh vectors.
Examples
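A minimal sketch, assuming the rayon prelude and illustrative pairs:
use rayon::prelude::*;

let pairs = [(1, 'a'), (2, 'b'), (3, 'c')];

// Unzip into any two `ParallelExtend` collections, e.g. two `Vec`s.
let (numbers, letters): (Vec<i32>, Vec<char>) =
    pairs.par_iter().cloned().unzip();

assert_eq!(numbers, [1, 2, 3]);
assert_eq!(letters, ['a', 'b', 'c']);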
use *; let a = ; let : = a.par_iter.cloned.unzip; assert_eq!; assert_eq!;Nested pairs can be unzipped too.
use *; let : = .into_par_iter .map .unzip; assert_eq!; assert_eq!; assert_eq!;fn partition<A, B, P>(self: Self, predicate: P) -> (A, B) where A: Default + Send + ParallelExtend<<Self as >::Item>, B: Default + Send + ParallelExtend<<Self as >::Item>, P: Fn(&<Self as >::Item) -> bool + Sync + SendPartitions the items of a parallel iterator into a pair of arbitrary
ParallelExtend containers. Items for which the predicate returns true go into the first container, and the rest go into the second.
Note: unlike the standard Iterator::partition, this allows distinct collection types for the left and right items. This is more flexible, but may require new type annotations when converting sequential code that used type inference assuming the two were the same.
Examples
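A minimal sketch, assuming the rayon prelude and an illustrative predicate:
use rayon::prelude::*;

// Left and right may even be different collection types.
let (evens, odds): (Vec<i32>, Vec<i32>) =
    (0..8).into_par_iter().partition(|n| n % 2 == 0);

assert_eq!(evens, [0, 2, 4, 6]);
assert_eq!(odds, [1, 3, 5, 7]);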
use *; let : = .into_par_iter.partition; assert_eq!; assert_eq!;fn partition_map<A, B, P, L, R>(self: Self, predicate: P) -> (A, B) where A: Default + Send + ParallelExtend<L>, B: Default + Send + ParallelExtend<R>, P: Fn(<Self as >::Item) -> Either<L, R> + Sync + Send, L: Send, R: SendPartitions and maps the items of a parallel iterator into a pair of arbitrary
ParallelExtendcontainers.Either::Leftitems go into the first container, andEither::Rightitems go into the second.Examples
use *; use Either; let : = .into_par_iter .partition_map; assert_eq!; assert_eq!;Nested
Eitherenums can be split as well.use *; use *; let : = .into_par_iter .partition_map; assert_eq!; assert_eq!; assert_eq!; assert_eq!;fn intersperse(self: Self, element: <Self as >::Item) -> Intersperse<Self> where <Self as >::Item: CloneIntersperses clones of an element between items of this iterator.
Examples
use *; let x = vec!; let r: = x.into_par_iter.intersperse.collect; assert_eq!;fn take_any(self: Self, n: usize) -> TakeAny<Self>Creates an iterator that yields
n elements from anywhere in the original iterator.
This is similar to IndexedParallelIterator::take without being constrained to the "first" n of the original iterator order. The taken items will still maintain their relative order where that is visible in collect, reduce, and similar outputs.
Examples
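A minimal sketch, assuming the rayon prelude (the filter and the count of three are illustrative):
use rayon::prelude::*;

// Take three multiples of ten from wherever they are found first.
let taken: Vec<i32> = (0..1000)
    .into_par_iter()
    .filter(|n| n % 10 == 0)
    .take_any(3)
    .collect();

assert_eq!(taken.len(), 3);
assert!(taken.iter().all(|n| n % 10 == 0));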
use *; let result: = .into_par_iter .filter .take_any .collect; assert_eq!; assert!;fn skip_any(self: Self, n: usize) -> SkipAny<Self>Creates an iterator that skips
nelements from anywhere in the original iterator.This is similar to
IndexedParallelIterator::skipwithout being constrained to the "first"nof the original iterator order. The remaining items will still maintain their relative order where that is visible incollect,reduce, and similar outputs.Examples
use *; let result: = .into_par_iter .filter .skip_any .collect; assert_eq!; assert!;fn take_any_while<P>(self: Self, predicate: P) -> TakeAnyWhile<Self, P> where P: Fn(&<Self as >::Item) -> bool + Sync + SendCreates an iterator that takes elements from anywhere in the original iterator until the given
predicate returns false.
The predicate may be anything -- e.g. it could be checking a fact about the item, a global condition unrelated to the item itself, or some combination thereof.
If parallel calls to the predicate race and give different results, then the true results will still take those particular items, while respecting the false result from elsewhere to skip any further items.
This is similar to Iterator::take_while without being constrained to the original iterator order. The taken items will still maintain their relative order where that is visible in collect, reduce, and similar outputs.
Examples
use *; let result: = .into_par_iter .take_any_while .collect; assert!; assert!;use *; use AtomicUsize; use Relaxed; // Collect any group of items that sum <= 1000 let quota = new; let result: = .into_par_iter .take_any_while .collect; let sum = result.iter.; assert!;fn skip_any_while<P>(self: Self, predicate: P) -> SkipAnyWhile<Self, P> where P: Fn(&<Self as >::Item) -> bool + Sync + SendCreates an iterator that skips elements from anywhere in the original iterator until the given
predicatereturnsfalse.The
predicatemay be anything -- e.g. it could be checking a fact about the item, a global condition unrelated to the item itself, or some combination thereof.If parallel calls to the
predicaterace and give different results, then thetrueresults will still skip those particular items, while respecting thefalseresult from elsewhere to skip any further items.This is similar to
Iterator::skip_whilewithout being constrained to the original iterator order. The remaining items will still maintain their relative order where that is visible incollect,reduce, and similar outputs.Examples
use *; let result: = .into_par_iter .skip_any_while .collect; assert!; assert!;fn collect_vec_list(self: Self) -> LinkedList<Vec<<Self as >::Item>>Collects this iterator into a linked list of vectors.
This is useful when you need to condense a parallel iterator into a collection, but have no specific requirements for what that collection should be. If you plan to store the collection longer-term, Vec<T> is, as always, likely the best default choice, despite the overhead that comes from concatenating each vector. Or, if this is an IndexedParallelIterator, you should also prefer to just collect to a Vec<T>.
Internally, most FromParallelIterator/ParallelExtend implementations use this strategy: each job collects its chunk of the iterator into a Vec<T>, those chunks are merged into a LinkedList, and the target collection is then extended with each vector. This is a very efficient way to collect an unindexed parallel iterator, without much intermediate data movement.
Examples
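A minimal sketch, assuming the rayon prelude (the pipeline is illustrative):
use rayon::prelude::*;
use std::collections::LinkedList;

let chunks: LinkedList<Vec<i32>> = (0..100)
    .into_par_iter()
    .map(|x| x * 2)
    .collect_vec_list();

// Flattening the list turns it back into an ordinary serial iterator.
let mut flat: Vec<i32> = chunks.into_iter().flatten().collect();
flat.sort();
assert_eq!(flat, (0..100).map(|x| x * 2).collect::<Vec<_>>());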
# use LinkedList; use *; let result: = .into_par_iter .filter .flat_map .collect_vec_list; // `par_iter.collect_vec_list().into_iter().flatten()` turns // a parallel iterator into a serial one let total_len = result.into_iter.flatten.count; assert_eq!;fn opt_len(self: &Self) -> Option<usize>Internal method used to define the behavior of this parallel iterator. You should not need to call this directly.
Returns the number of items produced by this iterator, if known statically. This can be used by consumers to trigger special fast paths. Therefore, if Some(_) is returned, this iterator must only use the (indexed) Consumer methods when driving a consumer, such as split_at(). Calling UnindexedConsumer::split_off_left() or other UnindexedConsumer methods -- or returning an inaccurate value -- may result in panics.
This method is currently used to optimize collect for want of true Rust specialization; it may be removed when specialization is stable.
Implementors
impl<'data, T, P> ParallelIterator for ChunkBy<'data, T, P>impl<I, T, F, R> ParallelIterator for MapWith<I, T, F>impl<T> ParallelIterator for Repeat<T>impl<'data, T: Sync + 'data> ParallelIterator for Windows<'data, T>impl<T: Send> ParallelIterator for Once<T>impl<'a, T: Send + 'a> ParallelIterator for IterMut<'a, T>impl<'data, T: Send + 'data> ParallelIterator for IterMut<'data, T>impl<I> ParallelIterator for TakeAny<I>impl<'ch> ParallelIterator for Lines<'ch>impl<I, ID, U, F> ParallelIterator for FoldChunks<I, ID, F>impl<I, P> ParallelIterator for SkipAnyWhile<I, P>impl<T: Ord + Send> ParallelIterator for IntoIter<T>impl<U, I, ID, F> ParallelIterator for TryFold<I, U, ID, F>impl<'data, T, P> ParallelIterator for SplitInclusiveMut<'data, T, P>impl<'a, K: Ord + Sync + 'a, V: Send + 'a> ParallelIterator for IterMut<'a, K, V>impl<A, B> ParallelIterator for MultiZip<(A, B)>impl<A, B, C, D, E, F, G, H> ParallelIterator for MultiZip<(A, B, C, D, E, F, G, H)>impl<'a> ParallelIterator for Drain<'a>impl<I> ParallelIterator for StepBy<I>impl<'a, T: Ord + Send> ParallelIterator for Drain<'a, T>impl<A, B> ParallelIterator for Zip<A, B>impl<T: Send> ParallelIterator for IntoIter<T>impl<'a, T: Send + 'a> ParallelIterator for IterMut<'a, T>impl<I> ParallelIterator for SkipAny<I>impl<I, P, R> ParallelIterator for FilterMap<I, P>impl<'ch, P: Pattern> ParallelIterator for Split<'ch, P>impl<T: Send> ParallelIterator for IntoIter<T>impl<'a, K: Ord + Sync + 'a, V: Sync + 'a> ParallelIterator for Iter<'a, K, V>impl<'a, T: Hash + Eq + Sync + 'a> ParallelIterator for Iter<'a, T>impl<I, U, F> ParallelIterator for FoldChunksWith<I, U, F>impl<'ch, P: Pattern> ParallelIterator for SplitInclusive<'ch, P>impl<I, F, SI> ParallelIterator for FlatMapIter<I, F>impl<I, T> ParallelIterator for WhileSome<I>impl<'a, T, I> ParallelIterator for Cloned<I>impl<'data, T, P> ParallelIterator for SplitInclusive<'data, T, P>impl<A> ParallelIterator for MultiZip<(A)>impl<I, F, PI> ParallelIterator for FlatMap<I, F>impl<A, B, C, D, E, F, G> ParallelIterator for MultiZip<(A, B, C, D, E, F, G)>impl<'ch, P: Pattern> ParallelIterator for Matches<'ch, P>impl<I, J> ParallelIterator for Interleave<I, J>impl<U, I, F> ParallelIterator for FoldWith<I, U, F>impl<T: Send> ParallelIterator for IntoIter<T>impl<T: Send> ParallelIterator for Empty<T>impl<A, B> ParallelIterator for Chain<A, B>impl<'ch> ParallelIterator for Bytes<'ch>impl<I, F> ParallelIterator for Inspect<I, F>impl<I> ParallelIterator for PanicFuse<I>impl<'a, T: Send> ParallelIterator for Drain<'a, T>impl<K: Hash + Eq + Send, V: Send> ParallelIterator for Drain<'_, K, V>impl<U, I, ID, F> ParallelIterator for Fold<I, ID, F>impl<'data, T, P> ParallelIterator for ChunkByMut<'data, T, P>impl<'data, T: Sync + 'data> ParallelIterator for Chunks<'data, T>impl<'data, T: Sync + 'data> ParallelIterator for ChunksExact<'data, T>impl<I, F, R> ParallelIterator for Map<I, F>impl<T: RangeInteger> ParallelIterator for Iter<T>impl<I, P> ParallelIterator for Positions<I, P>impl<'data, T: Send + 'data> ParallelIterator for ChunksMut<'data, T>impl<A, B, C, D, E, F> ParallelIterator for MultiZip<(A, B, C, D, E, F)>impl<'ch> ParallelIterator for SplitWhitespace<'ch>impl ParallelIterator for Iter<char>impl<A, B, C, D, E, F, G, H, I, J, K, L> ParallelIterator for MultiZip<(A, B, C, D, E, F, G, H, I, J, K, L)>impl<'data, T: Send + 'data> ParallelIterator for ChunksExactMut<'data, T>impl<A, B> ParallelIterator for ZipEq<A, B>impl<I> ParallelIterator for ExponentialBlocks<I>impl<S, B, I> ParallelIterator for 
WalkTreePrefix<S, B>impl<'data, T: Sync + 'data> ParallelIterator for RChunks<'data, T>impl<'ch> ParallelIterator for Chars<'ch>impl<'a, T: Sync + 'a> ParallelIterator for Iter<'a, T>impl<Iter: Iterator + Send> ParallelIterator for IterBridge<Iter>impl<'data, T: Sync + 'data> ParallelIterator for RChunksExact<'data, T>impl<I> ParallelIterator for Flatten<I>impl<I> ParallelIterator for MinLen<I>impl<'data, T: Send + 'data> ParallelIterator for RChunksMut<'data, T>impl<'data, T: Send + 'data> ParallelIterator for RChunksExactMut<'data, T>impl<'data, T: Send> ParallelIterator for Drain<'data, T>impl<'data, T: Sync + 'data> ParallelIterator for Iter<'data, T>impl<L, R> ParallelIterator for Either<L, R>impl<I> ParallelIterator for Intersperse<I>impl<S, B, I> ParallelIterator for WalkTreePostfix<S, B>impl<'a, T: Sync + 'a> ParallelIterator for Iter<'a, T>impl<T: Ord + Send> ParallelIterator for IntoIter<T>impl<D, S> ParallelIterator for Split<D, S>impl<A, B, C, D, E> ParallelIterator for MultiZip<(A, B, C, D, E)>impl<U, I, F> ParallelIterator for TryFoldWith<I, U, F>impl<'ch, P: Pattern> ParallelIterator for SplitTerminator<'ch, P>impl<I, P> ParallelIterator for Filter<I, P>impl<T: Send> ParallelIterator for IntoIter<T>impl<A, B, C, D, E, F, G, H, I, J, K> ParallelIterator for MultiZip<(A, B, C, D, E, F, G, H, I, J, K)>impl<K: Hash + Eq + Send, V: Send> ParallelIterator for IntoIter<K, V>impl<I, J> ParallelIterator for InterleaveShortest<I, J>impl<'data, T, P> ParallelIterator for SplitMut<'data, T, P>impl<I> ParallelIterator for Enumerate<I>impl<I> ParallelIterator for Take<I>impl<T: Hash + Eq + Send> ParallelIterator for Drain<'_, T>impl<'ch, P: Pattern> ParallelIterator for MatchIndices<'ch, P>impl<'a, T: Sync + 'a> ParallelIterator for Iter<'a, T>impl<'ch> ParallelIterator for EncodeUtf16<'ch>impl<T: Send> ParallelIterator for IntoIter<T>impl<A, B, C, D> ParallelIterator for MultiZip<(A, B, C, D)>impl<'a, T, I> ParallelIterator for Copied<I>impl<A, B, C, D, E, F, G, H, I, J> ParallelIterator for MultiZip<(A, B, C, D, E, F, G, H, I, J)>impl<I> ParallelIterator for UniformBlocks<I>impl<I> ParallelIterator for Skip<I>impl<'a, T: Ord + Sync + 'a> ParallelIterator for Iter<'a, T>impl<'a, T: Send + 'a> ParallelIterator for IterMut<'a, T>impl<'data, T, P> ParallelIterator for Split<'data, T, P>impl<'a, T: Send + 'a> ParallelIterator for IterMut<'a, T>impl<I> ParallelIterator for FlattenIter<I>impl<'a, K: Hash + Eq + Sync + 'a, V: Send + 'a> ParallelIterator for IterMut<'a, K, V>impl<I> ParallelIterator for Rev<I>impl<'ch> ParallelIterator for SplitAsciiWhitespace<'ch>impl<K: Ord + Send, V: Send> ParallelIterator for IntoIter<K, V>impl<T: RangeInteger> ParallelIterator for Iter<T>impl<T: Hash + Eq + Send> ParallelIterator for IntoIter<T>impl<I> ParallelIterator for MaxLen<I>impl<I> ParallelIterator for Chunks<I>impl<S, B, I> ParallelIterator for WalkTree<S, B>impl ParallelIterator for Iter<char>impl<T> ParallelIterator for RepeatN<T>impl<T: Send, N: usize> ParallelIterator for IntoIter<T, N>impl<'a, T: Ord + Sync + 'a> ParallelIterator for Iter<'a, T>impl<'ch> ParallelIterator for CharIndices<'ch>impl<I, INIT, T, F, R> ParallelIterator for MapInit<I, INIT, F>impl<A, B, C> ParallelIterator for MultiZip<(A, B, C)>impl<'a, T: Sync + 'a> ParallelIterator for Iter<'a, T>impl<I, P> ParallelIterator for TakeAnyWhile<I, P>impl<A, B, C, D, E, F, G, H, I> ParallelIterator for MultiZip<(A, B, C, D, E, F, G, H, I)>impl<I, F> ParallelIterator for Update<I, F>impl<'a, K: Hash + Eq + Sync + 'a, V: Sync + 'a> 
ParallelIterator for Iter<'a, K, V>