Trait IndexedParallelIterator
trait IndexedParallelIterator: ParallelIterator
An iterator that supports "random access" to its data, meaning that you can split it at arbitrary indices and draw data from those points.
Note: Not implemented for u64, i64, u128, or i128 ranges
Required Methods
fn len(&self) -> usize

Produces an exact count of how many items this iterator will produce, presuming no panic occurs.
Examples
use rayon::prelude::*;

let par_iter = (0..100).into_par_iter().zip(vec![0; 10]);
assert_eq!(par_iter.len(), 10);

let vec: Vec<_> = par_iter.collect();
assert_eq!(vec.len(), 10);

fn drive<C: Consumer<Self::Item>>(self, consumer: C) -> C::Result

Internal method used to define the behavior of this parallel iterator. You should not need to call this directly.

This method causes the iterator self to start producing items and to feed them to the consumer one by one. It may split the consumer before doing so to create the opportunity to produce in parallel. If a split does happen, it will inform the consumer of the index where the split should occur (unlike ParallelIterator::drive_unindexed()). See the README for more details on the internals of parallel iterators.
fn with_producer<CB: ProducerCallback<Self::Item>>(self, callback: CB) -> CB::Output

Internal method used to define the behavior of this parallel iterator. You should not need to call this directly.

This method converts the iterator into a producer P and then invokes callback.callback() with P. Note that the type of this producer is not defined as part of the API, since callback must be defined generically for all producers. This allows the producer type to contain references; it also means that parallel iterators can adjust that type without causing a breaking change. See the README for more details on the internals of parallel iterators.
Provided Methods
fn by_exponential_blocks(self) -> ExponentialBlocks<Self>

Divides an iterator into sequential blocks of exponentially-increasing size.

Normally, parallel iterators are recursively divided into tasks in parallel. This adaptor changes the default behavior by splitting the iterator into a sequence of parallel iterators of increasing sizes. Sizes grow exponentially in order to avoid creating too many blocks. This also allows balancing the current block with all previous ones.
This can have many applications but the most notable ones are:
- better performance with find_first()
- more predictable performance with find_any() or any interruptible computation
Examples
use rayon::prelude::*;

assert_eq!(
    (0..10_000)
        .into_par_iter()
        .by_exponential_blocks()
        .find_first(|&e| e == 4_999),
    Some(4_999)
);

In this example, without blocks, rayon would split the initial range into two, but all work on the right-hand side (from 5,000 onwards) would be useless, since the sequential algorithm never goes there. This means that if two threads are used, there will be no speedup at all.
by_exponential_blocks, on the other hand, will start with the leftmost range from 0 to p (where p is the number of threads), continue with p to 3p, then 3p to 7p, and so on. Each subrange is treated in parallel, while the subranges themselves are treated sequentially. We therefore ensure a logarithmic number of blocks (and overhead) while guaranteeing that we stop at the first block containing the searched data.
fn by_uniform_blocks(self, block_size: usize) -> UniformBlocks<Self>

Divides an iterator into sequential blocks of the given size.

Normally, parallel iterators are recursively divided into tasks in parallel. This adaptor changes the default behavior by splitting the iterator into a sequence of parallel iterators of the given block_size. The main application is to obtain better memory locality (especially if the reduce operation re-uses folded data).

Panics if block_size is 0.

Example
use rayon::prelude::*;

// during most reductions v1 and v2 fit the cache
let v = (0u32..10_000_000)
    .into_par_iter()
    .by_uniform_blocks(1_000_000)
    .fold(Vec::new, |mut v, e| { v.push(e); v })
    .reduce(Vec::new, |mut v1, mut v2| { v1.append(&mut v2); v1 });
assert_eq!(v, (0u32..10_000_000).collect::<Vec<u32>>());

fn collect_into_vec(self, target: &mut Vec<Self::Item>)

Collects the results of the iterator into the specified vector. The vector is always cleared before execution begins. If possible, reusing the vector across calls can lead to better performance since it reuses the same backing buffer.
Examples
use rayon::prelude::*;

// any prior data will be cleared
let mut vec = vec![-1, -2, -3];
(0..5).into_par_iter().collect_into_vec(&mut vec);
assert_eq!(vec, [0, 1, 2, 3, 4]);

fn unzip_into_vecs<A, B>(self, left: &mut Vec<A>, right: &mut Vec<B>) where Self: IndexedParallelIterator<Item = (A, B)>, A: Send, B: Send

Unzips the results of the iterator into the specified vectors. The vectors are always cleared before execution begins. If possible, reusing the vectors across calls can lead to better performance since they reuse the same backing buffer.
Examples
use rayon::prelude::*;

// any prior data will be cleared
let mut left = vec![42; 10];
let mut right = vec![-1; 10];
(10..15).into_par_iter().enumerate().unzip_into_vecs(&mut left, &mut right);
assert_eq!(left, [0, 1, 2, 3, 4]);
assert_eq!(right, [10, 11, 12, 13, 14]);

fn zip<Z>(self, zip_op: Z) -> Zip<Self, Z::Iter> where Z: IntoParallelIterator, Z::Iter: IndexedParallelIterator

Iterates over tuples (A, B), where the items A are from this iterator and B are from the iterator given as argument. Like the zip method on ordinary iterators, if the two iterators are of unequal length, you only get the items they have in common.

Examples
use rayon::prelude::*;

let result: Vec<_> = (1..4).into_par_iter().zip(vec!['a', 'b', 'c']).collect();
assert_eq!(result, [(1, 'a'), (2, 'b'), (3, 'c')]);

fn zip_eq<Z>(self, zip_op: Z) -> ZipEq<Self, Z::Iter> where Z: IntoParallelIterator, Z::Iter: IndexedParallelIterator

The same as Zip, but requires that both iterators have the same length.

Panics

Will panic if self and zip_op are not the same length.

use rayon::prelude::*;

let one = [1u8];
let two = [2u8, 2];
let one_iter = one.par_iter();
let two_iter = two.par_iter();

// this will panic
let zipped: Vec<(&u8, &u8)> = one_iter.zip_eq(two_iter).collect();

// we should never get here
assert_eq!(1, zipped.len());

fn interleave<I>(self, other: I) -> Interleave<Self, I::Iter> where I: IntoParallelIterator<Item = Self::Item>, I::Iter: IndexedParallelIterator<Item = Self::Item>

Interleaves elements of this iterator and the other given iterator. Alternately yields elements from this iterator and the given iterator, until both are exhausted. If one iterator is exhausted before the other, the last elements are provided from the other.
Examples
use rayon::prelude::*;

let (x, y) = (vec![1, 2], vec![3, 4, 5, 6]);
let r: Vec<i32> = x.into_par_iter().interleave(y).collect();
assert_eq!(r, vec![1, 3, 2, 4, 5, 6]);

fn interleave_shortest<I>(self, other: I) -> InterleaveShortest<Self, I::Iter> where I: IntoParallelIterator<Item = Self::Item>, I::Iter: IndexedParallelIterator<Item = Self::Item>

Interleaves elements of this iterator and the other given iterator, until one is exhausted.
Examples
use rayon::prelude::*;

let (x, y) = (vec![1, 2, 3, 4], vec![5, 6]);
let r: Vec<i32> = x.into_par_iter().interleave_shortest(y).collect();
assert_eq!(r, vec![1, 5, 2, 6, 3]);

fn chunks(self, chunk_size: usize) -> Chunks<Self>

Splits an iterator up into fixed-size chunks.

Returns an iterator that returns Vecs of the given number of elements. If the number of elements in the iterator is not divisible by chunk_size, the last chunk may be shorter than chunk_size.

See also par_chunks() and par_chunks_mut() for similar behavior on slices, without having to allocate intermediate Vecs for the chunks.

Panics if chunk_size is 0.

Examples
use rayon::prelude::*;

let a = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let r: Vec<Vec<i32>> = a.into_par_iter().chunks(3).collect();
assert_eq!(r, vec![vec![1, 2, 3], vec![4, 5, 6], vec![7, 8, 9], vec![10]]);

fn fold_chunks<T, ID, F>(self, chunk_size: usize, identity: ID, fold_op: F) -> FoldChunks<Self, ID, F> where ID: Fn() -> T + Send + Sync, F: Fn(T, Self::Item) -> T + Send + Sync, T: Send

Splits an iterator into fixed-size chunks, performing a sequential fold() on each chunk.

Returns an iterator that produces a folded result for each chunk of items produced by this iterator.

This works essentially like:

iter.chunks(chunk_size)
    .map(|chunk| chunk.into_iter().fold(identity, fold_op))

except there is no per-chunk allocation overhead.

Panics if chunk_size is 0.

Examples
use rayon::prelude::*;

let nums = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let chunk_sums = nums.into_par_iter().fold_chunks(2, || 0, |a, n| a + n).collect::<Vec<i32>>();
assert_eq!(chunk_sums, vec![3, 7, 11, 15, 19]);

fn fold_chunks_with<T, F>(self, chunk_size: usize, init: T, fold_op: F) -> FoldChunksWith<Self, T, F> where T: Send + Clone, F: Fn(T, Self::Item) -> T + Send + Sync

Splits an iterator into fixed-size chunks, performing a sequential fold() on each chunk.

Returns an iterator that produces a folded result for each chunk of items produced by this iterator.

This works essentially like fold_chunks(chunk_size, || init.clone(), fold_op), except it doesn't require the init type to be Sync, nor any other form of added synchronization.

Panics if chunk_size is 0.

Examples
use rayon::prelude::*;

let nums = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let chunk_sums = nums.into_par_iter().fold_chunks_with(2, 0, |a, n| a + n).collect::<Vec<i32>>();
assert_eq!(chunk_sums, vec![3, 7, 11, 15, 19]);

fn cmp<I>(self, other: I) -> Ordering where I: IntoParallelIterator<Item = Self::Item>, I::Iter: IndexedParallelIterator, Self::Item: Ord

Lexicographically compares the elements of this ParallelIterator with those of another.

Examples

use rayon::prelude::*;
use std::cmp::Ordering::*;

let x = vec![1, 2, 3];
assert_eq!(x.par_iter().cmp(&vec![1, 3]), Less);
assert_eq!(x.par_iter().cmp(&vec![1, 2, 3]), Equal);
assert_eq!(x.par_iter().cmp(&vec![1, 2]), Greater);

fn partial_cmp<I>(self, other: I) -> Option<Ordering> where I: IntoParallelIterator, I::Iter: IndexedParallelIterator, Self::Item: PartialOrd<I::Item>

Lexicographically compares the elements of this ParallelIterator with those of another.

Examples
use rayon::prelude::*;
use std::cmp::Ordering::*;
use std::f64::NAN;

let x = vec![1.0, 2.0, 3.0];
assert_eq!(x.par_iter().partial_cmp(&vec![1.0, 3.0]), Some(Less));
assert_eq!(x.par_iter().partial_cmp(&vec![1.0, 2.0, 3.0]), Some(Equal));
assert_eq!(x.par_iter().partial_cmp(&vec![1.0, 2.0]), Some(Greater));
assert_eq!(x.par_iter().partial_cmp(&vec![1.0, NAN]), None);

fn eq<I>(self, other: I) -> bool where I: IntoParallelIterator, I::Iter: IndexedParallelIterator, Self::Item: PartialEq<I::Item>

Determines if the elements of this ParallelIterator are equal to those of another.

fn ne<I>(self, other: I) -> bool where I: IntoParallelIterator, I::Iter: IndexedParallelIterator, Self::Item: PartialEq<I::Item>

Determines if the elements of this ParallelIterator are unequal to those of another.

fn lt<I>(self, other: I) -> bool where I: IntoParallelIterator, I::Iter: IndexedParallelIterator, Self::Item: PartialOrd<I::Item>

Determines if the elements of this ParallelIterator are lexicographically less than those of another.

fn le<I>(self, other: I) -> bool where I: IntoParallelIterator, I::Iter: IndexedParallelIterator, Self::Item: PartialOrd<I::Item>

Determines if the elements of this ParallelIterator are lexicographically less than or equal to those of another.

fn gt<I>(self, other: I) -> bool where I: IntoParallelIterator, I::Iter: IndexedParallelIterator, Self::Item: PartialOrd<I::Item>

Determines if the elements of this ParallelIterator are lexicographically greater than those of another.

fn ge<I>(self, other: I) -> bool where I: IntoParallelIterator, I::Iter: IndexedParallelIterator, Self::Item: PartialOrd<I::Item>

Determines if the elements of this ParallelIterator are lexicographically greater than or equal to those of another.

fn enumerate(self) -> Enumerate<Self>

Yields an index along with each item.
Examples
use rayon::prelude::*;

let chars = vec!['a', 'b', 'c'];
let result: Vec<_> = chars.into_par_iter().enumerate().collect();
assert_eq!(result, [(0, 'a'), (1, 'b'), (2, 'c')]);

fn step_by(self, step: usize) -> StepBy<Self>

Creates an iterator that steps by the given amount.
Examples
use rayon::prelude::*;

let range = 3..10;
let result: Vec<i32> = range.into_par_iter().step_by(3).collect();
assert_eq!(result, [3, 6, 9]);

fn skip(self, n: usize) -> Skip<Self>

Creates an iterator that skips the first n elements.

Examples
use rayon::prelude::*;

let result: Vec<_> = (0..100).into_par_iter().skip(95).collect();
assert_eq!(result, [95, 96, 97, 98, 99]);

fn take(self, n: usize) -> Take<Self>

Creates an iterator that yields the first n elements.

Examples

use rayon::prelude::*;

let result: Vec<_> = (0..100).into_par_iter().take(5).collect();
assert_eq!(result, [0, 1, 2, 3, 4]);

fn position_any<P>(self, predicate: P) -> Option<usize> where P: Fn(Self::Item) -> bool + Sync + Send

Searches for some item in the parallel iterator that matches the given predicate, and returns its index. Like ParallelIterator::find_any, the parallel search will not necessarily find the first match, and once a match is found we'll attempt to stop processing any more.

Examples
use rayon::prelude::*;

let a = [1, 2, 3, 3];

let i = a.par_iter().position_any(|&x| x == 3).expect("found");
assert!(i == 2 || i == 3);

assert_eq!(a.par_iter().position_any(|&x| x == 100), None);

fn position_first<P>(self, predicate: P) -> Option<usize> where P: Fn(Self::Item) -> bool + Sync + Send

Searches for the sequentially first item in the parallel iterator that matches the given predicate, and returns its index.

Like ParallelIterator::find_first, once a match is found, all attempts to the right of the match will be stopped, while attempts to the left must continue in case an earlier match is found.

Note that not all parallel iterators have a useful order, much like sequential HashMap iteration, so "first" may be nebulous. If you just want the first match that is discovered anywhere in the iterator, position_any is a better choice.

Examples
use rayon::prelude::*;

let a = [1, 2, 3, 3];

assert_eq!(a.par_iter().position_first(|&x| x == 3), Some(2));
assert_eq!(a.par_iter().position_first(|&x| x == 100), None);

fn position_last<P>(self, predicate: P) -> Option<usize> where P: Fn(Self::Item) -> bool + Sync + Send

Searches for the sequentially last item in the parallel iterator that matches the given predicate, and returns its index.

Like ParallelIterator::find_last, once a match is found, all attempts to the left of the match will be stopped, while attempts to the right must continue in case a later match is found.

Note that not all parallel iterators have a useful order, much like sequential HashMap iteration, so "last" may be nebulous. When the order doesn't actually matter to you, position_any is a better choice.

Examples
use rayon::prelude::*;

let a = [1, 2, 3, 3];

assert_eq!(a.par_iter().position_last(|&x| x == 3), Some(3));
assert_eq!(a.par_iter().position_last(|&x| x == 100), None);

fn positions<P>(self, predicate: P) -> Positions<Self, P> where P: Fn(Self::Item) -> bool + Sync + Send

Searches for items in the parallel iterator that match the given predicate, and returns their indices.
Examples
use rayon::prelude::*;

let primes = vec![2, 3, 5, 7, 11, 13, 17, 19, 23, 29];

// Find the positions of primes congruent to 1 modulo 6
let p1mod6: Vec<_> = primes.par_iter().positions(|&p| p % 6 == 1).collect();
assert_eq!(p1mod6, [3, 5, 7]); // primes 7, 13, and 19

// Find the positions of primes congruent to 5 modulo 6
let p5mod6: Vec<_> = primes.par_iter().positions(|&p| p % 6 == 5).collect();
assert_eq!(p5mod6, [2, 4, 6, 8, 9]); // primes 5, 11, 17, 23, and 29

fn rev(self) -> Rev<Self>

Produces a new iterator with the elements of this iterator in reverse order.
Examples
use rayon::prelude::*;

let result: Vec<_> = (0..5).into_par_iter().rev().collect();
assert_eq!(result, [4, 3, 2, 1, 0]);

fn with_min_len(self, min: usize) -> MinLen<Self>

Sets the minimum length of iterators desired to process in each rayon job. Rayon will not split any smaller than this length, but of course an iterator could already be smaller to begin with.

Producers like zip and interleave will use the greater of the two minimums. Chained iterators and iterators inside flat_map may each use their own minimum length.

Examples
use rayon::prelude::*;

let min = (0..1_000_000)
    .into_par_iter()
    .with_min_len(1234)
    .fold(|| 0, |acc, _| acc + 1) // count how many are in this segment
    .min().unwrap();

assert!(min >= 1234);

fn with_max_len(self, max: usize) -> MaxLen<Self>

Sets the maximum length of iterators desired to process in each rayon job. Rayon will try to split at least below this length, unless that would put it below the length from with_min_len(). For example, given min=10 and max=15, a length of 16 will not be split any further.

Producers like zip and interleave will use the lesser of the two maximums. Chained iterators and iterators inside flat_map may each use their own maximum length.

Examples

use rayon::prelude::*;

let max = (0..1_000_000)
    .into_par_iter()
    .with_max_len(1234)
    .fold(|| 0, |acc, _| acc + 1) // count how many are in this segment
    .max().unwrap();

assert!(max <= 1234);
Implementors
impl<'data, T: Send + 'data> IndexedParallelIterator for ChunksExactMut<'data, T>
impl<I> IndexedParallelIterator for Intersperse<I>
impl<A, B, C, D, E> IndexedParallelIterator for MultiZip<(A, B, C, D, E)>
impl<T: IndexedRangeInteger> IndexedParallelIterator for Iter<T>
impl<A, B, C, D, E, F, G, H, I, J, K> IndexedParallelIterator for MultiZip<(A, B, C, D, E, F, G, H, I, J, K)>
impl<I, J> IndexedParallelIterator for InterleaveShortest<I, J>
impl IndexedParallelIterator for Iter<char>
impl<'data, T: Sync + 'data> IndexedParallelIterator for Iter<'data, T>
impl<I> IndexedParallelIterator for Enumerate<I>
impl<I> IndexedParallelIterator for Take<I>
impl<'data, T: Sync + 'data> IndexedParallelIterator for ChunksExact<'data, T>
impl<L, R> IndexedParallelIterator for Either<L, R>
impl<'a, T: Send + 'a> IndexedParallelIterator for IterMut<'a, T>
impl<T: Send> IndexedParallelIterator for IntoIter<T>
impl<A, B, C, D> IndexedParallelIterator for MultiZip<(A, B, C, D)>
impl<A, B, C, D, E, F, G, H, I, J> IndexedParallelIterator for MultiZip<(A, B, C, D, E, F, G, H, I, J)>
impl<I> IndexedParallelIterator for Skip<I>
impl<'a, T: Ord + Sync + 'a> IndexedParallelIterator for Iter<'a, T>
impl<'data, T: Send + 'data> IndexedParallelIterator for RChunksMut<'data, T>
impl<'a, T: Send + 'a> IndexedParallelIterator for IterMut<'a, T>
impl<I> IndexedParallelIterator for Rev<I>
impl<I> IndexedParallelIterator for MaxLen<I>
impl<I> IndexedParallelIterator for Chunks<I>
impl<'a, T: Send + 'a> IndexedParallelIterator for IterMut<'a, T>
impl<T: Send> IndexedParallelIterator for IntoIter<T>
impl<T> IndexedParallelIterator for RepeatN<T>
impl<I, INIT, T, F, R> IndexedParallelIterator for MapInit<I, INIT, F>
impl<T: Send> IndexedParallelIterator for IntoIter<T>
impl<A, B, C> IndexedParallelIterator for MultiZip<(A, B, C)>
impl<A, B, C, D, E, F, G, H, I> IndexedParallelIterator for MultiZip<(A, B, C, D, E, F, G, H, I)>
impl<I, F> IndexedParallelIterator for Update<I, F>
impl<I, T, F, R> IndexedParallelIterator for MapWith<I, T, F>
impl<'data, T: Sync + 'data> IndexedParallelIterator for RChunks<'data, T>
impl<T: Send> IndexedParallelIterator for Once<T>
impl<I, ID, U, F> IndexedParallelIterator for FoldChunks<I, ID, F>
impl<T: Send> IndexedParallelIterator for IntoIter<T>
impl<T: Ord + Send> IndexedParallelIterator for IntoIter<T>
impl<'data, T: Sync + 'data> IndexedParallelIterator for Windows<'data, T>
impl<A, B> IndexedParallelIterator for MultiZip<(A, B)>
impl<A, B, C, D, E, F, G, H> IndexedParallelIterator for MultiZip<(A, B, C, D, E, F, G, H)>
impl<I> IndexedParallelIterator for StepBy<I>
impl<'a, T: Ord + Send> IndexedParallelIterator for Drain<'a, T>
impl<A, B> IndexedParallelIterator for Zip<A, B>
impl<'data, T: Send + 'data> IndexedParallelIterator for ChunksMut<'data, T>
impl<T: IndexedRangeInteger> IndexedParallelIterator for Iter<T>
impl<I, U, F> IndexedParallelIterator for FoldChunksWith<I, U, F>
impl IndexedParallelIterator for Iter<char>
impl<'a, T, I> IndexedParallelIterator for Cloned<I>
impl<'data, T: Send + 'data> IndexedParallelIterator for RChunksExactMut<'data, T>
impl<A> IndexedParallelIterator for MultiZip<(A,)>
impl<A, B, C, D, E, F, G> IndexedParallelIterator for MultiZip<(A, B, C, D, E, F, G)>
impl<I, J> IndexedParallelIterator for Interleave<I, J>
impl<'data, T: Sync + 'data> IndexedParallelIterator for Chunks<'data, T>
impl<'data, T: Send> IndexedParallelIterator for Drain<'data, T>
impl<T: Send> IndexedParallelIterator for Empty<T>
impl<A, B> IndexedParallelIterator for Chain<A, B>
impl<'a, T: Sync + 'a> IndexedParallelIterator for Iter<'a, T>
impl<I, F> IndexedParallelIterator for Inspect<I, F>
impl<I> IndexedParallelIterator for PanicFuse<I>
impl<'a, T: Send> IndexedParallelIterator for Drain<'a, T>
impl<'a, T, I> IndexedParallelIterator for Copied<I>
impl<'data, T: Sync + 'data> IndexedParallelIterator for RChunksExact<'data, T>
impl<I, F, R> IndexedParallelIterator for Map<I, F>
impl<A, B, C, D, E, F> IndexedParallelIterator for MultiZip<(A, B, C, D, E, F)>
impl<A, B, C, D, E, F, G, H, I, J, K, L> IndexedParallelIterator for MultiZip<(A, B, C, D, E, F, G, H, I, J, K, L)>
impl<'a, T: Sync + 'a> IndexedParallelIterator for Iter<'a, T>
impl<A, B> IndexedParallelIterator for ZipEq<A, B>
impl<'data, T: Send + 'data> IndexedParallelIterator for IterMut<'data, T>
impl<'a, T: Sync + 'a> IndexedParallelIterator for Iter<'a, T>
impl<T: Send, const N: usize> IndexedParallelIterator for IntoIter<T, N>
impl<I> IndexedParallelIterator for MinLen<I>