HDFS Read Flow Explained

This post walks through the HDFS read path in detail: how the client opens a file through the FileSystem object, how DistributedFileSystem uses RPC to ask the NameNode for block locations, how DFSInputStream manages I/O with the DataNodes, and how communication errors and checksum failures are handled.


Hadoop HDFS Data Read Operations

[Figure: HDFS read flow diagram]
i) Client opens the file it wishes to read by calling open() on the FileSystem object, which for HDFS is an instance of DistributedFileSystem.

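The client-facing call sequence in steps i and vi can be sketched as follows. This is a hypothetical simulation, not Hadoop code: FakeFileSystem stands in for DistributedFileSystem, and the point is that the client only ever sees open()/read()/close(), with block lookup and datanode I/O hidden behind the returned stream.

```python
# Hypothetical sketch (not Hadoop code) of the client-facing call sequence:
# the client sees only open()/read()/close(); block lookup and datanode I/O
# happen behind the returned stream object.

import io

class FakeFileSystem:
    def __init__(self, files):
        self.files = files               # path -> file bytes

    def open(self, path):
        # In HDFS this RPCs the namenode for block locations and returns an
        # FSDataInputStream wrapping a DFSInputStream; here it is a plain stream.
        return io.BytesIO(self.files[path])

fs = FakeFileSystem({"/user/demo/a.txt": b"hello"})
stream = fs.open("/user/demo/a.txt")     # step i: open the file
data = stream.read()                     # steps ii-v happen behind the scenes
stream.close()                           # step vi: close the stream
print(data)  # b'hello'
```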

ii) DistributedFileSystem calls the namenode using RPC to determine the locations of the first few blocks in the file. For each block, the namenode returns the addresses of the datanodes that hold a copy of that block, sorted by their proximity to the client.

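The namenode's answer in step ii can be pictured as below. This is an illustrative sketch, not Hadoop's implementation: the function name, the data shapes, and the distance values are assumptions, though the tiers mirror HDFS's usual ordering (same node < same rack < off-rack).

```python
# Hypothetical sketch of step ii: for each block, sort replica addresses by
# network distance to the client. Distance tiers mirror HDFS rack awareness:
# local node (0) < same rack (2) < different rack (4).

def sort_replicas_by_proximity(block_locations, client_node, client_rack):
    """Return block locations with each replica list sorted nearest-first.

    block_locations: list of (block_id, [(host, rack), ...])
    """
    def distance(replica):
        host, rack = replica
        if host == client_node:
            return 0          # local replica: no network hop
        if rack == client_rack:
            return 2          # same rack: one switch away
        return 4              # different rack: core switch traversal

    return [(block_id, sorted(replicas, key=distance))
            for block_id, replicas in block_locations]

locations = [("blk_1", [("dn3", "rackB"), ("dn1", "rackA"), ("dn2", "rackA")])]
sorted_locs = sort_replicas_by_proximity(locations,
                                         client_node="dn1", client_rack="rackA")
print(sorted_locs[0][1])  # nearest replica first: dn1, dn2, dn3
```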

iii) DistributedFileSystem returns an FSDataInputStream to the client for it to read data from. FSDataInputStream wraps a DFSInputStream, which manages the datanode and namenode I/O. The client calls read() on the stream, and DFSInputStream, which has stored the datanode addresses, connects to the closest datanode holding the first block in the file.


iv) Data is streamed from the datanode back to the client, and the client calls read() repeatedly on the stream. When the block ends, DFSInputStream closes the connection to that datanode and finds the best datanode for the next block.

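Steps iii and iv together can be sketched as a block-by-block read loop. The classes here are illustrative stand-ins, not Hadoop's DFSInputStream: for each block the stream connects to the nearest replica, streams its bytes, then drops the connection at the block boundary and moves on.

```python
# Hypothetical sketch of steps iii-iv: connect to the nearest datanode for the
# current block, stream it, close the connection at the block boundary, then
# pick the best datanode for the next block.

class FakeDatanode:
    def __init__(self, host, blocks):
        self.host = host
        self.blocks = blocks            # block_id -> block bytes

    def stream_block(self, block_id):
        return self.blocks[block_id]

def read_file(block_order, located_blocks, datanodes):
    """located_blocks: block_id -> [host, ...] sorted nearest-first."""
    data = b""
    for block_id in block_order:
        nearest = located_blocks[block_id][0]   # best replica for this block
        data += datanodes[nearest].stream_block(block_id)
        # the connection to `nearest` is closed here, before the next block
    return data

dns = {"dn1": FakeDatanode("dn1", {"blk_1": b"Hello, "}),
       "dn2": FakeDatanode("dn2", {"blk_2": b"HDFS!"})}
result = read_file(["blk_1", "blk_2"],
                   {"blk_1": ["dn1"], "blk_2": ["dn2"]}, dns)
print(result)  # b'Hello, HDFS!'
```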

v) If the DFSInputStream encounters an error while communicating with a datanode, it will try the next closest one for that block. It will also remember datanodes that have failed so that it doesn’t needlessly retry them for later blocks. The DFSInputStream also verifies checksums for the data transferred to it from the datanode. If it finds a corrupt block, it reports this to the namenode before attempting to read a replica of the block from another datanode.

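The failover logic of step v can be sketched as below. All names are hypothetical and the checksum is simplified to MD5 (real HDFS uses per-chunk CRC32C): on an I/O error the client falls back to the next-closest replica, remembers failed datanodes so later blocks skip them, and verifies a checksum on every block received, reporting corrupt copies to the namenode.

```python
# Hypothetical sketch of step v: fall back replica by replica, remember failed
# datanodes across blocks, verify a checksum on each copy, and report corrupt
# copies to the namenode before trying the next replica.

import hashlib

def read_block_with_failover(block_id, replicas, fetch, failed, report_corrupt):
    """replicas: hosts sorted nearest-first; fetch(host, id) -> (data, checksum)."""
    for host in replicas:
        if host in failed:
            continue                        # don't retry a known-bad datanode
        try:
            data, checksum = fetch(host, block_id)
        except IOError:
            failed.add(host)                # remember it for later blocks too
            continue
        if hashlib.md5(data).hexdigest() != checksum:
            report_corrupt(host, block_id)  # tell the namenode, try another replica
            continue
        return data
    raise IOError(f"no healthy replica for {block_id}")

# Scenario: dn1 errors out, dn2 serves a corrupt copy, dn3 succeeds.
good = b"block-bytes"
store = {"dn1": IOError(),
         "dn2": (b"garbled", hashlib.md5(good).hexdigest()),
         "dn3": (good, hashlib.md5(good).hexdigest())}

def fetch(host, block_id):
    value = store[host]
    if isinstance(value, IOError):
        raise value
    return value

failed, corrupt_reports = set(), []
data = read_block_with_failover(
    "blk_1", ["dn1", "dn2", "dn3"], fetch,
    failed, lambda h, b: corrupt_reports.append((h, b)))
print(data, failed, corrupt_reports)
# b'block-bytes' {'dn1'} [('dn2', 'blk_1')]
```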

vi) When the client has finished reading the data, it calls close() on the stream.


The English text in this post is reproduced from https://data-flair.training/blogs/hadoop-hdfs-data-read-and-write-operations/.
