Spark Core Operators Explained: aggregateByKey and combineByKey
aggregateByKey
aggregateByKey has three overloaded declarations:
def aggregateByKey[U: ClassTag](zeroValue: U, partitioner: Partitioner)
(seqOp: (U, V) => U, combOp: (U, U) => U): RDD[(K, U)]
def aggregateByKey[U: ClassTag](zeroValue: U, numPartitions: Int)
(seqOp: (U, V) => U, combOp: (U, U) => U): RDD[(K, U)]
def aggregateByKey[U: ClassTag](zeroValue: U)
(seqOp: (U, V) => U, combOp: (U, U) => U): RDD[(K, U)]
The three overloads differ only in how the output partitioning is chosen: an explicit Partitioner, a target number of partitions, or the default partitioner. In every case, zeroValue is the initial accumulator for each key, seqOp folds a value into the accumulator within a single partition, and combOp merges the per-partition accumulators for the same key.
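To make the parameters concrete, here is a minimal runnable sketch (the object name AggregateByKeyDemo and the sample data are assumptions made for illustration):

import org.apache.spark.{SparkConf, SparkContext}

object AggregateByKeyDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("aggregateByKeyDemo").setMaster("local[2]"))

    // Five (String, Int) pairs spread across two partitions.
    val pairs = sc.parallelize(
      Seq(("a", 1), ("a", 2), ("b", 3), ("a", 4), ("b", 5)), 2)

    // zeroValue = 0 starts each key's accumulator;
    // seqOp (_ + _) adds a value to the accumulator inside one partition;
    // combOp (_ + _) merges per-partition accumulators for the same key.
    val sums = pairs.aggregateByKey(0)(_ + _, _ + _)
    sums.collect().foreach(println) // prints (a,7) and (b,8)

    sc.stop()
  }
}

Note that, unlike reduceByKey, the accumulator type U is allowed to differ from the value type V, which is exactly why two separate functions (seqOp and combOp) are required.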