
com.spotify.scio.values

PairSCollectionFunctions

class PairSCollectionFunctions[K, V] extends AnyRef

Extra functions available on SCollections of (key, value) pairs through an implicit conversion.

Source
PairSCollectionFunctions.scala

Instance Constructors

  1. new PairSCollectionFunctions(self: SCollection[(K, V)])

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def aggregateByKey[A, U](aggregator: MonoidAggregator[V, A, U])(implicit arg0: Coder[A], arg1: Coder[U]): SCollection[(K, U)]

    Aggregate the values of each key with MonoidAggregator.

    Aggregate the values of each key with MonoidAggregator. First each value V is mapped to A, then we reduce with a Monoid of A, then finally we present the results as U. This could be more powerful and better optimized in some cases.

  5. def aggregateByKey[A, U](aggregator: Aggregator[V, A, U])(implicit arg0: Coder[A], arg1: Coder[U]): SCollection[(K, U)]

    Aggregate the values of each key with Aggregator.

    Aggregate the values of each key with Aggregator. First each value V is mapped to A, then we reduce with a Semigroup of A, then finally we present the results as U. This could be more powerful and better optimized in some cases.

  6. def aggregateByKey[U](zeroValue: => U)(seqOp: (U, V) => U, combOp: (U, U) => U)(implicit arg0: Coder[U]): SCollection[(K, U)]

    Aggregate the values of each key, using given combine functions and a neutral "zero value".

    Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this SCollection, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.
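    A minimal sketch, assuming a hypothetical input of type SCollection[(String, Double)], that builds a (sum, count) accumulator per key:

      val input: SCollection[(String, Double)] = ???  // provided elsewhere
      val sumAndCount: SCollection[(String, (Double, Long))] =
        input.aggregateByKey((0.0, 0L))(
          (acc, v) => (acc._1 + v, acc._2 + 1L),  // merge a value V into the accumulator U
          (a, b) => (a._1 + b._1, a._2 + b._2)    // merge two accumulators
        )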

  7. def applyPerKeyDoFn[U](t: DoFn[KV[K, V], KV[K, U]])(implicit arg0: Coder[U]): SCollection[(K, U)]

    Apply a DoFn that processes KVs and wrap the output in an SCollection.

  8. def approxQuantilesByKey(numQuantiles: Int)(implicit ord: Ordering[V]): SCollection[(K, Iterable[V])]

    For each key, compute the values' data distribution using approximate N-tiles.

    For each key, compute the values' data distribution using approximate N-tiles.

    returns

    a new SCollection whose values are Iterables of the approximate N-tiles of the elements.
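    A minimal sketch, assuming a hypothetical SCollection[(String, Double)] of latencies per key:

      val latencies: SCollection[(String, Double)] = ???  // provided elsewhere
      // numQuantiles = 5 yields (approximately) the min, 25th, 50th and 75th percentiles, and max per key
      val quartiles: SCollection[(String, Iterable[Double])] = latencies.approxQuantilesByKey(5)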

  9. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  10. def asMapSideInput: SideInput[Map[K, V]]

    Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, value], to be used with SCollection.withSideInputs.

    Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, value], to be used with SCollection.withSideInputs. It is required that each key of the input be associated with a single value.

    Note: the underlying map implementation is runner specific and may have performance overhead. Use asMapSingletonSideInput instead if the resulting map can fit into memory.
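    A minimal sketch, assuming hypothetical main and lookup collections, of using the side input with SCollection.withSideInputs:

      val main: SCollection[String] = ???           // elements to decorate
      val lookup: SCollection[(String, Int)] = ???  // unique keys required
      val si: SideInput[Map[String, Int]] = lookup.asMapSideInput
      val decorated: SCollection[(String, Option[Int])] = main
        .withSideInputs(si)
        .map((k, ctx) => (k, ctx(si).get(k)))       // ctx(si) is the per-window Map[String, Int]
        .toSCollection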

  11. def asMapSingletonSideInput: SideInput[Map[K, V]]

    Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, value], to be used with SCollection.withSideInputs.

    Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, value], to be used with SCollection.withSideInputs. It is required that each key of the input be associated with a single value.

    Currently, the resulting map is required to fit into memory. This is preferable to asMapSideInput if that's the case.

  12. def asMultiMapSideInput: SideInput[Map[K, Iterable[V]]]

    Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, Iterable[value]], to be used with SCollection.withSideInputs.

    Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, Iterable[value]], to be used with SCollection.withSideInputs. In contrast to asMapSideInput, it is not required that the keys in the input collection be unique.

    Note: the underlying map implementation is runner specific and may have performance overhead. Use asMultiMapSingletonSideInput instead if the resulting map can fit into memory.

  13. def asMultiMapSingletonSideInput: SideInput[Map[K, Iterable[V]]]

    Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, Iterable[value]], to be used with SCollection.withSideInputs.

    Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, Iterable[value]], to be used with SCollection.withSideInputs. In contrast to asMapSingletonSideInput, it is not required that the keys in the input collection be unique.

    Currently, the resulting map is required to fit into memory. This is preferable to asMultiMapSideInput if that's the case.

  14. def batchByKey(batchSize: Long, maxBufferingDuration: Duration = Duration.ZERO): SCollection[(K, Iterable[V])]

    Batches inputs to a desired batch size.

    Batches inputs to a desired batch size. Batches will contain only elements of a single key.

    Elements are buffered until there are batchSize elements buffered, at which point they are emitted to the output SCollection.

    Windows are preserved (batches contain elements from the same window). Batches may contain elements from more than one bundle.

    A time limit (in processing time) on how long an incomplete batch of elements is allowed to be buffered can be set. Once a batch is flushed to output, the timer is reset. The provided limit must be a positive duration or zero; a zero buffering duration effectively means no limit.
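    A minimal sketch with hypothetical sizes; maxBufferingDuration is assumed to be an org.joda.time.Duration, as used elsewhere in Beam:

      import org.joda.time.Duration

      val events: SCollection[(String, String)] = ???  // provided elsewhere
      // Emit batches of up to 100 values per key, or whatever has buffered after 10 seconds.
      val batches: SCollection[(String, Iterable[String])] =
        events.batchByKey(100L, Duration.standardSeconds(10))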

  15. def batchByteSizedByKey(batchByteSize: Long, maxBufferingDuration: Duration = Duration.ZERO): SCollection[(K, Iterable[V])]

    Batches inputs to a desired batch of byte size.

    Batches inputs to a desired batch of byte size. Batches will contain only elements of a single key.

    The value coder is used to determine the byte size of each element.

    Elements are buffered until an estimated batchByteSize bytes have accumulated, at which point they are emitted to the output SCollection.

    Windows are preserved (batches contain elements from the same window). Batches may contain elements from more than one bundle.

    A time limit (in processing time) on how long an incomplete batch of elements is allowed to be buffered can be set. Once a batch is flushed to output, the timer is reset. The provided limit must be a positive duration or zero; a zero buffering duration effectively means no limit.

  16. def batchWeightedByKey(weight: Long, cost: (V) => Long, maxBufferingDuration: Duration = Duration.ZERO): SCollection[(K, Iterable[V])]

    Batches inputs to a desired weight.

    Batches inputs to a desired weight. Batches will contain only elements of a single key.

    The weight of each element is computed using the provided cost function.

    Elements are buffered until the weight is reached, at which point they are emitted to the output SCollection.

    Windows are preserved (batches contain elements from the same window). Batches may contain elements from more than one bundle.

    A time limit (in processing time) on how long an incomplete batch of elements is allowed to be buffered can be set. Once a batch is flushed to output, the timer is reset. The provided limit must be a positive duration or zero; a zero buffering duration effectively means no limit.

  17. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  18. def cogroup[W1, W2, W3](rhs1: SCollection[(K, W1)], rhs2: SCollection[(K, W2)], rhs3: SCollection[(K, W3)]): SCollection[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]

    For each key k in this or rhs1 or rhs2 or rhs3, return a resulting SCollection that contains a tuple with the list of values for that key in this, rhs1, rhs2 and rhs3.

  19. def cogroup[W1, W2](rhs1: SCollection[(K, W1)], rhs2: SCollection[(K, W2)]): SCollection[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]

    For each key k in this or rhs1 or rhs2, return a resulting SCollection that contains a tuple with the list of values for that key in this, rhs1 and rhs2.

  20. def cogroup[W](rhs: SCollection[(K, W)]): SCollection[(K, (Iterable[V], Iterable[W]))]

    For each key k in this or rhs, return a resulting SCollection that contains a tuple with the list of values for that key in this as well as rhs.
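    A minimal sketch of the two-collection overload, with hypothetical element types:

      val lhs: SCollection[(String, Int)] = ???     // provided elsewhere
      val rhs: SCollection[(String, Double)] = ???  // provided elsewhere
      // For each key: all Int values from lhs and all Double values from rhs.
      val grouped: SCollection[(String, (Iterable[Int], Iterable[Double]))] = lhs.cogroup(rhs)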

  21. def combineByKey[C](createCombiner: (V) => C)(mergeValue: (C, V) => C)(mergeCombiners: (C, C) => C)(implicit arg0: Coder[C]): SCollection[(K, C)]

    Generic function to combine the elements for each key using a custom set of aggregation functions.

    Generic function to combine the elements for each key using a custom set of aggregation functions. Turns an SCollection[(K, V)] into a result of type SCollection[(K, C)], for a "combined type" C. Note that V and C can be different -- for example, one might group an SCollection of type (Int, Int) into an SCollection of type (Int, Seq[Int]). Users provide three functions:

    • createCombiner, which turns a V into a C (e.g., creates a one-element list)
    • mergeValue, to merge a V into a C (e.g., adds it to the end of a list)
    • mergeCombiners, to combine two C's into a single one.

    Both mergeValue and mergeCombiners are allowed to modify and return their first argument instead of creating a new C to avoid memory allocation.
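    A minimal sketch, assuming a hypothetical SCollection[(String, Int)], that collects values into per-key lists:

      val input: SCollection[(String, Int)] = ???  // provided elsewhere
      // createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C
      val lists: SCollection[(String, List[Int])] =
        input.combineByKey(v => List(v))((acc, v) => v :: acc)((a, b) => a ::: b)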

  22. def countApproxDistinctByKey(estimator: ApproxDistinctCounter[V]): SCollection[(K, Long)]

    Return a new SCollection of (key, value) pairs where the value is the estimated distinct count (as a Long) for each unique key.

    Return a new SCollection of (key, value) pairs where the value is the estimated distinct count (as a Long) for each unique key. The accuracy of the estimate depends on the given ApproxDistinctCounter estimator.

    returns

    a keyed SCollection whose Long values hold the approximate distinct count per key.

    Example:

      val input: SCollection[(K, V)] = ...
      val distinctCount: SCollection[(K, Long)] =
        input.countApproxDistinctByKey(ApproximateUniqueCounter(sampleSize))

    There are two known subclasses of ApproxDistinctCounter available as HLL++ implementations in the scio-extra module:

    • com.spotify.scio.extra.hll.sketching.SketchingHyperLogLogPlusPlus
    • com.spotify.scio.extra.hll.zetasketch.ZetasketchHll_Counter

    See also: HyperLogLog++ (the Google HLL++ paper).
  23. def countApproxDistinctByKey(maximumEstimationError: Double = 0.02): SCollection[(K, Long)]

    Count approximate number of distinct values for each key in the SCollection.

    Count approximate number of distinct values for each key in the SCollection.

    maximumEstimationError

    the maximum estimation error, which should be in the range [0.01, 0.5].

  24. def countApproxDistinctByKey(sampleSize: Int): SCollection[(K, Long)]

    Count approximate number of distinct values for each key in the SCollection.

    Count approximate number of distinct values for each key in the SCollection.

    sampleSize

    the number of entries in the statistical sample; the higher this number, the more accurate the estimate will be; should be >= 16.

  25. def countByKey: SCollection[(K, Long)]

    Count the number of elements for each key.

    Count the number of elements for each key.

    returns

    a new SCollection of (key, count) pairs

  26. def distinctByKey: SCollection[(K, V)]

    Return a new SCollection of (key, value) pairs without duplicates based on the keys.

    Return a new SCollection of (key, value) pairs without duplicates based on the keys. Which value is kept for each key is arbitrary.

    returns

    a new SCollection of (key, value) pairs

  27. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  28. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  29. def filterValues(f: (V) => Boolean): SCollection[(K, V)]

    Return a new SCollection of (key, value) pairs whose values satisfy the predicate.

  30. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  31. def flatMapValues[U](f: (V) => TraversableOnce[U])(implicit arg0: Coder[U]): SCollection[(K, U)]

    Pass each value in the key-value pair SCollection through a flatMap function without changing the keys.

  32. def flattenValues[U](implicit arg0: Coder[U], ev: <:<[V, TraversableOnce[U]]): SCollection[(K, U)]

    Return an SCollection having its values flattened.

  33. def foldByKey(implicit mon: Monoid[V]): SCollection[(K, V)]

    Fold by key with Monoid, which defines the associative function and "zero value" for V.

    Fold by key with Monoid, which defines the associative function and "zero value" for V. This could be more powerful and better optimized in some cases.

  34. def foldByKey(zeroValue: => V)(op: (V, V) => V): SCollection[(K, V)]

    Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.).

    Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication.). The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.
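    A minimal sketch, assuming a hypothetical SCollection[(String, Int)], summing values per key:

      val counts: SCollection[(String, Int)] = ???  // provided elsewhere
      val sums: SCollection[(String, Int)] = counts.foldByKey(0)(_ + _)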

  35. def fullOuterJoin[W](rhs: SCollection[(K, W)]): SCollection[(K, (Option[V], Option[W]))]

    Perform a full outer join of this and rhs.

    Perform a full outer join of this and rhs. For each element (k, v) in this, the resulting SCollection will either contain all pairs (k, (Some(v), Some(w))) for w in rhs, or the pair (k, (Some(v), None)) if no elements in rhs have key k. Similarly, for each element (k, w) in rhs, the resulting SCollection will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k.

  36. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  37. def groupByKey: SCollection[(K, Iterable[V])]

    Group the values for each key in the SCollection into a single sequence.

    Group the values for each key in the SCollection into a single sequence. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting SCollection is evaluated.

    Note: This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairSCollectionFunctions.aggregateByKey or PairSCollectionFunctions.reduceByKey will provide much better performance.

    Note: As currently implemented, groupByKey must be able to hold all the key-value pairs for any key in memory. If a key has too many values, it can result in an OutOfMemoryError.

  38. def groupWith[W1, W2, W3](rhs1: SCollection[(K, W1)], rhs2: SCollection[(K, W2)], rhs3: SCollection[(K, W3)]): SCollection[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]

    Alias for cogroup.

  39. def groupWith[W1, W2](rhs1: SCollection[(K, W1)], rhs2: SCollection[(K, W2)]): SCollection[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]

    Alias for cogroup.

  40. def groupWith[W](rhs: SCollection[(K, W)]): SCollection[(K, (Iterable[V], Iterable[W]))]

    Alias for cogroup.

  41. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  42. def hashPartitionByKey(numPartitions: Int): Seq[SCollection[(K, V)]]

    Partition this SCollection using K.## into n partitions.

    Partition this SCollection using K.## into n partitions. Note that K should produce a consistent hash code across different JVMs.

    numPartitions

    number of output partitions

    returns

    partitioned SCollections in a Seq
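    A minimal sketch with hypothetical types, splitting into three partitions by key hash:

      val pairs: SCollection[(String, Int)] = ???  // provided elsewhere
      val parts: Seq[SCollection[(String, Int)]] = pairs.hashPartitionByKey(3)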

  43. def intersectByKey(rhs: SCollection[K]): SCollection[(K, V)]

    Return an SCollection with the pairs from this whose keys are in rhs.

    Return an SCollection with the pairs from this whose keys are in rhs.

    Unlike SCollection.intersection this preserves duplicates in this.

  44. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  45. def join[W](rhs: SCollection[(K, W)]): SCollection[(K, (V, W))]

    Return an SCollection containing all pairs of elements with matching keys in this and rhs.

    Return an SCollection containing all pairs of elements with matching keys in this and rhs. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in rhs.
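    A minimal sketch with hypothetical collections keyed by a shared id:

      val names: SCollection[(String, String)] = ???    // id -> name
      val amounts: SCollection[(String, Double)] = ???  // id -> amount
      val joined: SCollection[(String, (String, Double))] = names.join(amounts)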

  46. implicit lazy val keyCoder: Coder[K]
  47. def keys: SCollection[K]

    Return an SCollection with the keys of each tuple.

  48. def leftOuterJoin[W](rhs: SCollection[(K, W)]): SCollection[(K, (V, Option[W]))]

    Perform a left outer join of this and rhs.

    Perform a left outer join of this and rhs. For each element (k, v) in this, the resulting SCollection will either contain all pairs (k, (v, Some(w))) for w in rhs, or the pair (k, (v, None)) if no elements in rhs have key k.

  49. def mapKeys[U](f: (K) => U)(implicit arg0: Coder[U]): SCollection[(U, V)]

    Pass each key in the key-value pair SCollection through a map function without changing the values.

  50. def mapValues[U](f: (V) => U)(implicit arg0: Coder[U]): SCollection[(K, U)]

    Pass each value in the key-value pair SCollection through a map function without changing the keys.

  51. def maxByKey(implicit ord: Ordering[V]): SCollection[(K, V)]

    Return the max of values for each key as defined by the implicit Ordering[V].

    Return the max of values for each key as defined by the implicit Ordering[V].

    returns

    a new SCollection of (key, maximum value) pairs

  52. def minByKey(implicit ord: Ordering[V]): SCollection[(K, V)]

    Return the min of values for each key as defined by the implicit Ordering[V].

    Return the min of values for each key as defined by the implicit Ordering[V].

    returns

    a new SCollection of (key, minimum value) pairs

  53. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  54. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  55. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  56. def reduceByKey(op: (V, V) => V): SCollection[(K, V)]

    Merge the values for each key using an associative reduce function.

    Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.
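    A minimal word-count sketch, assuming a hypothetical SCollection[String] of words:

      val words: SCollection[String] = ???  // provided elsewhere
      val wordCounts: SCollection[(String, Long)] =
        words.map(w => (w, 1L)).reduceByKey(_ + _)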

  57. def reifyAsMapInGlobalWindow: SCollection[Map[K, V]]

    Returns an SCollection consisting of a single Map[K, V] element.

  58. def reifyAsMultiMapInGlobalWindow: SCollection[Map[K, Iterable[V]]]

    Returns an SCollection consisting of a single Map[K, Iterable[V]] element.

  59. def rightOuterJoin[W](rhs: SCollection[(K, W)]): SCollection[(K, (Option[V], W))]

    Perform a right outer join of this and rhs.

    Perform a right outer join of this and rhs. For each element (k, w) in rhs, the resulting SCollection will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k.

  60. def sampleByKey(withReplacement: Boolean, fractions: Map[K, Double]): SCollection[(K, V)]

    Return a subset of this SCollection sampled by key (via stratified sampling).

    Return a subset of this SCollection sampled by key (via stratified sampling).

    Create a sample of this SCollection using variable sampling rates for different keys as specified by fractions, a key to sampling rate map, via simple random sampling with one pass over the SCollection, to produce a sample of size that's approximately equal to the sum of math.ceil(numItems * samplingRate) over all key values.

    withReplacement

    whether to sample with or without replacement

    fractions

    map of specific keys to sampling rates

    returns

    SCollection containing the sampled subset
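    A minimal sketch with hypothetical keys and rates, keeping roughly 10% of key "a" and 50% of key "b":

      val pairs: SCollection[(String, Int)] = ???  // provided elsewhere
      val sampled: SCollection[(String, Int)] =
        pairs.sampleByKey(withReplacement = false, Map("a" -> 0.1, "b" -> 0.5))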

  61. def sampleByKey(sampleSize: Int): SCollection[(K, Iterable[V])]

    Return a sampled subset of values for each key of this SCollection.

    Return a sampled subset of values for each key of this SCollection.

    returns

    a new SCollection of (key, sampled values) pairs

  62. val self: SCollection[(K, V)]
  63. def sparseFullOuterJoin[W](rhs: SCollection[(K, W)], rhsNumKeys: Long, fpProb: Double = 0.01)(implicit funnel: Funnel[K]): SCollection[(K, (Option[V], Option[W]))]

    Full outer join for cases when the left collection (this) is much larger than the right collection (rhs), which cannot fit in memory but shares a mostly overlapping set of keys with the left collection.

    Full outer join for cases when the left collection (this) is much larger than the right collection (rhs), which cannot fit in memory but shares a mostly overlapping set of keys with the left collection, i.e. when the intersection of keys is sparse in the left collection. A Bloom Filter of keys from the right collection (rhs) is used to split this into 2 partitions. Only those with keys in the filter go through the join and the rest are concatenated. This is useful for joining historical aggregates with incremental updates.

    Import magnolify.guava.auto._ to get common instances of Guava Funnels.

    Read more about Bloom Filter: com.google.common.hash.BloomFilter.

    rhsNumKeys

    An estimate of the number of keys in the right collection rhs. This estimate is used to find the size and number of BloomFilters that Scio would use to split the left collection (this) into overlap and intersection in a "map" step before an exact join. Having a value close to the actual number improves the false positives in intermediate steps which means less shuffle.

    fpProb

    A fraction in range (0, 1) which would be the accepted false positive probability when computing the overlap. Note: having fpProb = 0 doesn't mean that Scio would calculate an exact overlap.

  64. def sparseIntersectByKey(rhs: SCollection[K], rhsNumKeys: Long, computeExact: Boolean = false, fpProb: Double = 0.01)(implicit funnel: Funnel[K]): SCollection[(K, V)]

    Return an SCollection with the pairs from this whose keys are in rhs when the cardinality of this >> rhs, but neither can fit in memory (see PairHashSCollectionFunctions.hashIntersectByKey).

    Return an SCollection with the pairs from this whose keys are in rhs when the cardinality of this >> rhs, but neither can fit in memory (see PairHashSCollectionFunctions.hashIntersectByKey).

    Unlike SCollection.intersection this preserves duplicates in this.

    Import magnolify.guava.auto._ to get common instances of Guava Funnels.

    rhsNumKeys

    An estimate of the number of keys in rhs. This estimate is used to find the size and number of BloomFilters that Scio would use to pre-filter this in a "map" step before any join. Having a value close to the actual number improves the false positives in output. When computeExact is set to true, a more accurate estimate of the number of keys in rhs would mean less shuffle when finding the exact value.

    computeExact

    Whether to directly pass through Bloom filter results (with a small false positive rate) or to perform an additional inner join to confirm the exact result set. By default this is set to false.

    fpProb

    A fraction in range (0, 1) which would be the accepted false positive probability for this transform. By default, when computeExact is set to false, this reflects the probability that an output element is an incorrect intersection (meaning it may not be present in rhs). When computeExact is set to true, this fraction is used to find the acceptable false positive rate in the intermediate step before computing the exact result. Note: having fpProb = 0 doesn't mean an exact computation. This value along with rhsNumKeys is used for creating a BloomFilter.

  65. def sparseIntersectByKey[AF <: ApproxFilter[K]](sideInput: SideInput[AF]): SCollection[(K, V)]

    Return an SCollection with the pairs from this whose keys might be present in the SideInput.

    Return an SCollection with the pairs from this whose keys might be present in the SideInput.

    The SideInput[ApproxFilter] can be reused for multiple sparse operations across multiple SCollections.

    Example:

      val si = pairSCollRight.asApproxFilterSideInput(BloomFilter, 1000000)
      val filtered1 = pairSColl1.sparseIntersectByKey(si)
      val filtered2 = pairSColl2.sparseIntersectByKey(si)
  66. def sparseJoin[W](rhs: SCollection[(K, W)], rhsNumKeys: Long, fpProb: Double = 0.01)(implicit funnel: Funnel[K]): SCollection[(K, (V, W))]

    Inner join for cases when the left collection (this) is much larger than the right collection (rhs), which cannot fit in memory but shares a mostly overlapping set of keys with the left collection.

    Inner join for cases when the left collection (this) is much larger than the right collection (rhs), which cannot fit in memory but shares a mostly overlapping set of keys with the left collection, i.e. when the intersection of keys is sparse in the left collection. A Bloom Filter of keys from the right collection (rhs) is used to split this into 2 partitions. Only those with keys in the filter go through the join and the rest are filtered out before the join.

    Import magnolify.guava.auto._ to get common instances of Guava Funnels.

    Read more about Bloom Filter: com.google.common.hash.BloomFilter.

    rhsNumKeys

    An estimate of the number of keys in the right collection rhs. This estimate is used to find the size and number of BloomFilters that Scio would use to split the left collection (this) into overlap and intersection in a "map" step before an exact join. Having a value close to the actual number improves the false positives in intermediate steps which means less shuffle.

    fpProb

    A fraction in range (0, 1) which would be the accepted false positive probability when computing the overlap. Note: having fpProb = 0 doesn't mean that Scio would calculate an exact overlap.
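    A minimal sketch with hypothetical types and key count; the magnolify import is expected to supply the implicit Funnel[String]:

      import magnolify.guava.auto._

      val lhs: SCollection[(String, Int)] = ???     // large collection
      val rhs: SCollection[(String, Double)] = ???  // smaller collection with mostly overlapping keys
      val joined: SCollection[(String, (Int, Double))] =
        lhs.sparseJoin(rhs, rhsNumKeys = 1000000L)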

  67. def sparseLeftOuterJoin[W](rhs: SCollection[(K, W)], rhsNumKeys: Long, fpProb: Double = 0.01)(implicit funnel: Funnel[K]): SCollection[(K, (V, Option[W]))]

    Left outer join for cases when the left collection (this) is much larger than the right collection (rhs), which cannot fit in memory but shares a mostly overlapping set of keys with the left collection.

    Left outer join for cases when the left collection (this) is much larger than the right collection (rhs), which cannot fit in memory but shares a mostly overlapping set of keys with the left collection, i.e. when the intersection of keys is sparse in the left collection. A Bloom Filter of keys from the right collection (rhs) is used to split this into 2 partitions. Only those with keys in the filter go through the join and the rest are concatenated. This is useful for joining historical aggregates with incremental updates.

    Import magnolify.guava.auto._ to get common instances of Guava Funnels.

    Read more about Bloom Filter: com.google.common.hash.BloomFilter.

    rhsNumKeys

    An estimate of the number of keys in the right collection rhs. This estimate is used to find the size and number of BloomFilters that Scio would use to split the left collection (this) into overlap and intersection in a "map" step before an exact join. Having a value close to the actual number improves the false positives in intermediate steps which means less shuffle.

    fpProb

    A fraction in range (0, 1) which would be the accepted false positive probability when computing the overlap. Note: having fpProb = 0 doesn't mean that Scio would calculate an exact overlap.

  68. def sparseLookup[A, B](rhs1: SCollection[(K, A)], rhs2: SCollection[(K, B)], thisNumKeys: Long)(implicit funnel: Funnel[K]): SCollection[(K, (V, Iterable[A], Iterable[B]))]

    Look up values from rhs1 and rhs2, which are much larger than this, when the keys of this won't fit in memory and are sparse in the right-hand collections.

    Look up values from rhs1 and rhs2, which are much larger than this, when the keys of this won't fit in memory and are sparse in the right-hand collections. A Bloom Filter of keys in this is used to filter out irrelevant keys in rhs1 and rhs2. This is useful when searching for a limited number of values from one or more very large tables.

    Import magnolify.guava.auto._ to get common instances of Guava Funnels.

    Read more about Bloom Filter: com.google.common.hash.BloomFilter.

    thisNumKeys

    An estimate of the number of keys in this. This estimate is used to find the size and number of BloomFilters that Scio would use to pre-filter rhs1 and rhs2 before doing a co-group. Having a value close to the actual number improves the false positives in intermediate steps which means less shuffle.

  69. def sparseLookup[A, B](rhs1: SCollection[(K, A)], rhs2: SCollection[(K, B)], thisNumKeys: Long, fpProb: Double)(implicit funnel: Funnel[K]): SCollection[(K, (V, Iterable[A], Iterable[B]))]

    Look up values from rhs1 and rhs2, which are much larger than this, when the keys of this won't fit in memory and are sparse in the right-hand collections.

    Look up values from rhs1 and rhs2, which are much larger than this, when the keys of this won't fit in memory and are sparse in the right-hand collections. A Bloom Filter of keys in this is used to filter out irrelevant keys in rhs1 and rhs2. This is useful when searching for a limited number of values from one or more very large tables.

    Import magnolify.guava.auto._ to get common instances of Guava Funnels.

    Read more about Bloom Filter: com.google.common.hash.BloomFilter.

    thisNumKeys

    An estimate of the number of keys in this. This estimate is used to find the size and number of BloomFilters that Scio would use to pre-filter rhs1 and rhs2 before doing a co-group. Having a value close to the actual number improves the false positives in intermediate steps which means less shuffle.

    fpProb

    A fraction in range (0, 1) which would be the accepted false positive probability when discarding elements of rhs1 and rhs2 in the pre-filter step.

  70. def sparseLookup[A](rhs: SCollection[(K, A)], thisNumKeys: Long)(implicit funnel: Funnel[K]): SCollection[(K, (V, Iterable[A]))]

    Look up values from rhs, which is much larger than this, when the keys of this won't fit in memory and are sparse in rhs.

    Look up values from rhs, which is much larger than this, when the keys of this won't fit in memory and are sparse in rhs. A Bloom Filter of keys in this is used to filter out irrelevant keys in rhs. This is useful when searching for a limited number of values from one or more very large tables. Read more about Bloom Filter: com.google.common.hash.BloomFilter.

    thisNumKeys

    An estimate of the number of keys in this. This estimate is used to find the size and number of BloomFilters that Scio would use to pre-filter rhs before doing a co-group. Having a value close to the actual number improves the false positives in intermediate steps which means less shuffle.

  71. def sparseLookup[A](rhs: SCollection[(K, A)], thisNumKeys: Long, fpProb: Double)(implicit funnel: Funnel[K]): SCollection[(K, (V, Iterable[A]))]

    Look up values from rhs, which is much larger than this, when the keys of this won't fit in memory and are sparse in rhs.

    Look up values from rhs, which is much larger than this, when the keys of this won't fit in memory and are sparse in rhs. A Bloom Filter of keys in this is used to filter out irrelevant keys in rhs. This is useful when searching for a limited number of values from one or more very large tables. Read more about Bloom Filter: com.google.common.hash.BloomFilter.

    thisNumKeys

    An estimate of the number of keys in this. This estimate is used to find the size and number of BloomFilters that Scio would use to pre-filter rhs before doing a co-group. Having a value close to the actual number improves the false positives in intermediate steps which means less shuffle.

    fpProb

    A fraction in range (0, 1) which would be the accepted false positive probability when discarding elements of rhs in the pre-filter step.

  72. def sparseRightOuterJoin[W](rhs: SCollection[(K, W)], rhsNumKeys: Long, fpProb: Double = 0.01)(implicit funnel: Funnel[K]): SCollection[(K, (Option[V], W))]

    Right outer join for cases when the left collection (this) is much larger than the right collection (rhs), which cannot fit in memory but shares a mostly overlapping set of keys with the left collection.

    Right outer join for cases when the left collection (this) is much larger than the right collection (rhs), which cannot fit in memory but shares a mostly overlapping set of keys with the left collection, i.e. when the intersection of keys is sparse in the left collection. A Bloom Filter of keys from the right collection (rhs) is used to split this into 2 partitions. Only those with keys in the filter go through the join and the rest are concatenated. This is useful for joining historical aggregates with incremental updates.

    Import magnolify.guava.auto._ to get common instances of Guava Funnels.

    Read more about Bloom Filter: com.google.common.hash.BloomFilter.

    rhsNumKeys

    An estimate of the number of keys in the right collection rhs. This estimate is used to find the size and number of BloomFilters that Scio would use to split the left collection (this) into overlap and intersection in a "map" step before an exact join. Having a value close to the actual number improves the false positives in intermediate steps which means less shuffle.

    fpProb

    A fraction in range (0, 1) which would be the accepted false positive probability when computing the overlap. Note: having fpProb = 0 doesn't mean that Scio would calculate an exact overlap.

  73. def subtractByKey(rhs: SCollection[K]): SCollection[(K, V)]

    Return an SCollection with the pairs from this whose keys are not in rhs.

  74. def sumByKey(implicit sg: Semigroup[V]): SCollection[(K, V)]

    Reduce by key with Semigroup.

    Reduce by key with Semigroup. This could be more powerful and better optimized than reduceByKey in some cases.
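    A minimal sketch, assuming a hypothetical SCollection[(String, Long)]; the implicit Semigroup[Long] is expected to be resolved from Algebird's Semigroup companion:

      val counts: SCollection[(String, Long)] = ???  // provided elsewhere
      val totals: SCollection[(String, Long)] = counts.sumByKey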

  75. def swap: SCollection[(V, K)]

    Swap the keys with the values.

  76. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  77. def toString(): String
    Definition Classes
    AnyRef → Any
  78. def topByKey(num: Int)(implicit ord: Ordering[V]): SCollection[(K, Iterable[V])]

    Return the top num (largest) values for each key from this SCollection as defined by the specified implicit Ordering[V].

    Return the top num (largest) values for each key from this SCollection as defined by the specified implicit Ordering[V].

    returns

    a new SCollection of (key, top num values) pairs
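    A minimal sketch with hypothetical types, keeping the three largest values per key:

      val scores: SCollection[(String, Double)] = ???  // provided elsewhere
      val top3: SCollection[(String, Iterable[Double])] = scores.topByKey(3)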

  79. implicit lazy val valueCoder: Coder[V]
  80. def values: SCollection[V]

    Return an SCollection with the values of each tuple.

  81. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  82. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  83. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  84. def withHotKeyFanout(hotKeyFanout: Int): SCollectionWithHotKeyFanout[K, V]

    Convert this SCollection to an SCollectionWithHotKeyFanout that uses an intermediate node to combine "hot" keys partially before performing the full combine.

    Convert this SCollection to an SCollectionWithHotKeyFanout that uses an intermediate node to combine "hot" keys partially before performing the full combine.

    hotKeyFanout

    constant value for every key

  85. def withHotKeyFanout(hotKeyFanout: (K) => Int): SCollectionWithHotKeyFanout[K, V]

    Convert this SCollection to an SCollectionWithHotKeyFanout that uses an intermediate node to combine "hot" keys partially before performing the full combine.

    Convert this SCollection to an SCollectionWithHotKeyFanout that uses an intermediate node to combine "hot" keys partially before performing the full combine.

    hotKeyFanout

    a function from keys to an integer N, where the key will be spread among N intermediate nodes for partial combining. If N is less than or equal to 1, this key will not be sent through an intermediate node.
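    A minimal sketch with a hypothetical fanout function, combining hot keys with extra intermediate parallelism:

      val pairs: SCollection[(String, Long)] = ???  // provided elsewhere
      val sums: SCollection[(String, Long)] =
        pairs.withHotKeyFanout(k => if (k == "hot") 100 else 1).sumByKey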
