To use checkpointing, first call SparkContext's setCheckpointDir() method to set a directory on a fault-tolerant file system, such as HDFS; then call the checkpoint() method on the RDD. After the job containing that RDD finishes, Spark launches a separate job that writes the checkpointed RDD's data to the configured file system, giving a highly available, fault-tolerant form of persistence.
From then on, even if the RDD's persisted data is accidentally lost when the RDD is reused later, the data can be read back directly from its checkpoint files instead of being recomputed.
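A minimal sketch of this flow; the app name, HDFS path, and dataset are placeholders:

import org.apache.spark.{SparkConf, SparkContext}

object CheckpointDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("checkpoint-demo").setMaster("local[*]"))

    // Must point at a fault-tolerant file system in production; this path is a placeholder
    sc.setCheckpointDir("hdfs://namenode:8020/tmp/checkpoints")

    val rdd = sc.parallelize(1 to 1000).map(_ * 2)
    rdd.checkpoint()            // only marks the RDD; nothing is written yet

    rdd.count()                 // the first action runs the job, then the separate checkpoint job
    println(rdd.isCheckpointed) // true once the checkpoint job has finished

    sc.stop()
  }
}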
1. After checkpoint() is called on an RDD, that RDD is managed by an RDDCheckpointData object.
2. RDDCheckpointData sets the state of the RDD on which checkpoint() was called to MarkedForCheckpoint.
3. After the job containing the RDD finishes, doCheckpoint() is invoked on the job's final RDD; it walks up the finalRDD's lineage looking for RDDs marked MarkedForCheckpoint and marks them CheckpointingInProgress.
4. A separate job is launched to checkpoint the RDDs in the lineage marked CheckpointingInProgress, i.e. to write their data to the file system configured via SparkContext.setCheckpointDir().
5. Once the RDD's data has been checkpointed, the RDD's lineage is changed: all of its dependency RDDs are cleared, its parent is forcibly set to a CheckpointRDD, and its state becomes Checkpointed (see the state sketch after this list).
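The four states above correspond to a small enumeration inside RDDCheckpointData; a sketch modeled on older Spark sources (newer releases drop MarkedForCheckpoint and go straight from Initialized to CheckpointingInProgress):

// Sketch of the checkpoint state machine, per older Spark versions
private[spark] object CheckpointState extends Enumeration {
  type CheckpointState = Value
  val Initialized, MarkedForCheckpoint, CheckpointingInProgress, Checkpointed = Value
}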
It is usually recommended to call persist(StorageLevel.DISK_ONLY) on an RDD before checkpointing it, so that as soon as the RDD is computed it is written straight to disk; the checkpoint job can then read the RDD's data directly from disk and checkpoint it to the external file system, as sketched below.
If the RDD is not persisted when it is checkpointed, the intermediate RDD is gone once the job finishes, so before the checkpoint job can write the RDD's data to the external file system it has to recompute the RDD from all of its ancestor RDDs.
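A sketch of this recommendation, reusing the sc from the earlier example; the map is a stand-in for an expensive transformation:

import org.apache.spark.storage.StorageLevel

val expensive = sc.parallelize(1 to 1000000).map(x => x.toLong * x)
expensive.persist(StorageLevel.DISK_ONLY) // computed partitions go straight to disk
expensive.checkpoint()
expensive.count() // job 1 computes and persists; the checkpoint job then reads from disk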
checkpoint vs. persistence:
Persistence merely stores the data in the BlockManager; the RDD's lineage is unchanged.
After checkpointing completes, the RDD no longer has its former dependencies, only the forcibly assigned CheckpointRDD.
Persisted data is more likely to be lost, since data on disk or in memory is relatively fragile; checkpointed data lives in HDFS and is highly fault tolerant.
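The lineage truncation can be observed with toDebugString; a sketch (exact output varies across Spark versions):

val data = sc.parallelize(1 to 10).map(_ + 1).filter(_ % 2 == 0)
data.checkpoint()
println(data.toDebugString) // full lineage down to the ParallelCollectionRDD
data.count()                // triggers the checkpoint job
println(data.toDebugString) // the lineage now starts at a ReliableCheckpointRDD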
// RDD class
/**
 * Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
 * This should ''not'' be called by users directly, but is available for implementors of custom
 * subclasses of RDD.
 */
final def iterator(split: Partition, context: TaskContext): Iterator[T] = {
  if (storageLevel != StorageLevel.NONE) {
    getOrCompute(split, context)
  } else {
    // No StorageLevel set: read the data from the checkpoint or compute it directly
    computeOrReadCheckpoint(split, context)
  }
}
// RDD class
/**
 * Compute an RDD partition or read it from a checkpoint if the RDD is checkpointing.
 */
private[spark] def computeOrReadCheckpoint(split: Partition, context: TaskContext): Iterator[T] =
{
  if (isCheckpointedAndMaterialized) {
    // Read the data from the checkpoint: firstParent is now the CheckpointRDD,
    // so this ends up in CheckpointRDD's compute method
    firstParent[T].iterator(split, context)
  } else {
    compute(split, context)
  }
}
private[spark] abstract class CheckpointRDD[T: ClassTag](sc: SparkContext)
  extends RDD[T](sc, Nil) {

  // CheckpointRDD should not be checkpointed again
  override def doCheckpoint(): Unit = { }
  override def checkpoint(): Unit = { }
  override def localCheckpoint(): this.type = this

  // Note: There is a bug in MiMa that complains about `AbstractMethodProblem`s in the
  // base [[org.apache.spark.rdd.RDD]] class if we do not override the following methods.
  // scalastyle:off
  protected override def getPartitions: Array[Partition] = ???
  override def compute(p: Partition, tc: TaskContext): Iterator[T] = ???
  // scalastyle:on
}
CheckpointRDD has two concrete subclasses: LocalCheckpointRDD and ReliableCheckpointRDD.
The normal path goes through ReliableCheckpointRDD.
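LocalCheckpointRDD backs the localCheckpoint() API, which trades fault tolerance for speed by keeping the data in executor storage instead of an external file system. A sketch of the two entry points (the checkpoint directory is a placeholder):

// Reliable checkpoint: needs setCheckpointDir, survives executor loss
sc.setCheckpointDir("hdfs://namenode:8020/tmp/checkpoints")
val reliable = sc.parallelize(1 to 100)
reliable.checkpoint()
reliable.count()

// Local checkpoint: no checkpoint dir needed; data stays in executor storage,
// which is faster but not fault tolerant
val local = sc.parallelize(1 to 100)
local.localCheckpoint()
local.count()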
// ReliableCheckpointRDD class
/**
 * Read the content of the checkpoint file associated with the given partition.
 */
override def compute(split: Partition, context: TaskContext): Iterator[T] = {
  val file = new Path(checkpointPath, ReliableCheckpointRDD.checkpointFileName(split.index))
  ReliableCheckpointRDD.readCheckpointFile(file, broadcastedConf, context)
}
// ReliableCheckpointRDD companion object
/**
 * Read the content of the specified checkpoint file.
 */
def readCheckpointFile[T](
    path: Path,
    broadcastedConf: Broadcast[SerializableConfiguration],
    context: TaskContext): Iterator[T] = {
  val env = SparkEnv.get
  val fs = path.getFileSystem(broadcastedConf.value.value)
  val bufferSize = env.conf.getInt("spark.buffer.size", 65536)
  val fileInputStream = {
    val fileStream = fs.open(path, bufferSize)
    if (env.conf.get(CHECKPOINT_COMPRESS)) {
      // Wrap the stream in a decompressor when checkpoint compression is enabled
      CompressionCodec.createCodec(env.conf).compressedInputStream(fileStream)
    } else {
      fileStream
    }
  }
  val serializer = env.serializer.newInstance()
  val deserializeStream = serializer.deserializeStream(fileInputStream)

  // Register an on-task-completion callback to close the input stream.
  context.addTaskCompletionListener[Unit](context => deserializeStream.close())
  deserializeStream.asIterator.asInstanceOf[Iterator[T]]
}
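The CHECKPOINT_COMPRESS flag read above corresponds to the spark.checkpoint.compress configuration key, which is off by default; enabling it compresses checkpoint files with the codec chosen by spark.io.compression.codec:

val conf = new SparkConf()
  .setAppName("checkpoint-compress-demo")
  .set("spark.checkpoint.compress", "true") // compress RDD checkpoint files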