fsimage and edits_log

Analysis of fsimage and edits_log

1. Procedure

[hadoop@master ~]$ hdfs namenode -format
[hadoop@master ~]$ start-dfs.sh 
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/soft/hadoop-2.7.3/logs/hadoop-hadoop-namenode-master.out
slave03: starting datanode, logging to /home/hadoop/soft/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave03.out
slave01: starting datanode, logging to /home/hadoop/soft/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave01.out
slave02: starting datanode, logging to /home/hadoop/soft/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave02.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/soft/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-master.out
[hadoop@master ~]$ hadoop fs -mkdir /mumu
[hadoop@master ~]$ hadoop fs -mkdir /public
[hadoop@master ~]$ hadoop fs -put test.txt /Hello.java
[hadoop@master ~]$ hadoop fs -put Tree/ /mumu/
[hadoop@master ~]$ hadoop fs -rm -r /public
19/08/03 09:50:49 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /public
[hadoop@master ~]$ hadoop fs -ls -R /
-rw-r--r--   3 hadoop supergroup        106 2019-08-03 09:48 /Hello.java
drwxr-xr-x   - hadoop supergroup          0 2019-08-03 09:50 /mumu
drwxr-xr-x   - hadoop supergroup          0 2019-08-03 09:50 /mumu/Tree
-rw-r--r--   3 hadoop supergroup        866 2019-08-03 09:50 /mumu/Tree/tree.txt

2. Viewing the fsimage file [metadata information]

  • [hadoop@master current]$ hdfs oiv -p XML -i fsimage_0000000000000000000 -o ~/fs00.xml
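
    The command above is run from the NameNode's metadata directory (the current subdirectory of dfs.namenode.name.dir); that is where all fsimage and edits files live. To get there, assuming the Hadoop default location of /tmp/hadoop-hadoop/dfs/name for user hadoop (substitute your own dfs.namenode.name.dir from hdfs-site.xml if it is configured differently):

  • [hadoop@master ~]$ cd /tmp/hadoop-hadoop/dfs/name/current      # assumed default path; check hdfs-site.xml
  • [hadoop@master current]$ ls fsimage_* edits_* seen_txid        # the checkpoint images and edit log segments analyzed below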

3. Command for viewing the edits log files [edit log]

  • [hadoop@master current]$ hdfs oev -p XML -i edits_0000000000000000001-0000000000000000002 -o ~/ed01-02.xml
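
    The edit log only holds operations recorded since the last checkpoint; they are folded into a new fsimage when a checkpoint runs. To watch that happen without waiting for the SecondaryNameNode's schedule, a checkpoint can be forced with the standard hdfs dfsadmin commands (run as the HDFS superuser):

  • [hadoop@master ~]$ hdfs dfsadmin -safemode enter      # saveNamespace requires safe mode
  • [hadoop@master ~]$ hdfs dfsadmin -saveNamespace       # merges the edit log into a new fsimage_N
  • [hadoop@master ~]$ hdfs dfsadmin -safemode leave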

4. How the fsimage file is formed

  • [hadoop@master current]$ hdfs oiv -p XML -i fsimage_0000000000000000000 -o ~/fs00.xml
<?xml version="1.0"?>
<fsimage>
	<NameSection>
		<genstampV1>1000</genstampV1>
		<genstampV2>1000</genstampV2>
		<genstampV1Limit>0</genstampV1Limit>
		<lastAllocatedBlockId>1073741824</lastAllocatedBlockId>【last allocated block ID】
		<txid>0</txid>【transaction ID is 0】
	</NameSection>
	
	<INodeSection>
		<lastInodeId>16385</lastInodeId>
		<inode>
			<id>16385</id>【inode ID】
			<type>DIRECTORY</type>【inode type】
			<name></name>【empty name: this is the root directory】
			<mtime>0</mtime>【modification time】
			<permission>hadoop:supergroup:rwxr-xr-x</permission>【permissions】
			<nsquota>9223372036854775807</nsquota>【namespace quota】
			<dsquota>-1</dsquota>【disk space quota】
		</inode>
	</INodeSection>
	
	<INodeReferenceSection></INodeReferenceSection>
	
	<SnapshotSection>
		<snapshotCounter>0</snapshotCounter>
	</SnapshotSection>
	
	<INodeDirectorySection></INodeDirectorySection>
	
	<FileUnderConstructionSection></FileUnderConstructionSection>
	
	<SnapshotDiffSection>
		<diff>
			<inodeid>16385</inodeid>
		</diff>
	</SnapshotDiffSection>
	
	<SecretManagerSection>
		<currentId>0</currentId>
		<tokenSequenceNumber>0</tokenSequenceNumber>
	</SecretManagerSection>
	
	<CacheManagerSection>
		<nextDirectiveId>1</nextDirectiveId>【next cache directive ID】
	</CacheManagerSection>
</fsimage>
  • [hadoop@master current]$ hdfs oev -p XML -i edits_0000000000000000001-0000000000000000002 -o ~/ed01-02.xml
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
	<EDITS_VERSION>-63</EDITS_VERSION>【edit log version】
	<RECORD>【one record per operation】
		<OPCODE>OP_START_LOG_SEGMENT</OPCODE>【start of a log segment】
		<DATA>
			<TXID>1</TXID>【transaction ID 1】
		</DATA>
	</RECORD>
	<RECORD>
		<OPCODE>OP_END_LOG_SEGMENT</OPCODE>【end of the log segment】
		<DATA>
			<TXID>2</TXID>【transaction ID 2】
		</DATA>
	</RECORD>
</EDITS>
【Opening and closing a log segment each consume one transaction ID】
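
Because each record's opening tag sits on its own line in the XML dump, a segment can be summarized quickly with grep, using the ~/ed01-02.xml file generated above:

  • [hadoop@master ~]$ grep -c '<RECORD>' ~/ed01-02.xml                # 2 records: segment start + segment end
  • [hadoop@master ~]$ grep '<OPCODE>' ~/ed01-02.xml | sort | uniq -c  # tally of operation types in the segment
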
  • [hadoop@master current]$ hdfs oiv -p XML -i fsimage_0000000000000000002 -o ~/fs02.xml
<?xml version="1.0"?>
<fsimage>
	<NameSection>
		<genstampV1>1000</genstampV1>
		<genstampV2>1000</genstampV2>
		<genstampV1Limit>0</genstampV1Limit>
		<lastAllocatedBlockId>1073741824</lastAllocatedBlockId>
		<txid>0</txid>
	</NameSection>
	
	<INodeSection>
		<lastInodeId>16385</lastInodeId>
		<inode>
			<id>16385</id>
			<type>DIRECTORY</type>
			<name></name>
			<mtime>0</mtime>
			<permission>hadoop:supergroup:rwxr-xr-x</permission>
			<nsquota>9223372036854775807</nsquota>
			<dsquota>-1</dsquota>
		</inode>
	</INodeSection>
	
	<INodeReferenceSection></INodeReferenceSection>
	
	<SnapshotSection>
		<snapshotCounter>0</snapshotCounter>
	</SnapshotSection>
	
	<INodeDirectorySection></INodeDirectorySection>
	
	<FileUnderConstructionSection></FileUnderConstructionSection>
	
	<SnapshotDiffSection>
		<diff>
			<inodeid>16385</inodeid>
		</diff>
	</SnapshotDiffSection>
	
	<SecretManagerSection>
		<currentId>0</currentId>
		<tokenSequenceNumber>0</tokenSequenceNumber>
	</SecretManagerSection>
	
	<CacheManagerSection>
		<nextDirectiveId>1</nextDirectiveId>
	</CacheManagerSection>
</fsimage>
  • [hadoop@master current]$ hdfs oev -p XML -i edits_0000000000000000003-0000000000000000019 -o ~/ed03-19.xml
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>3</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_MKDIR</OPCODE>【create-directory operation】
    <DATA>
      <TXID>4</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16386</INODEID>
      <PATH>/mumu</PATH>【directory path】
      <TIMESTAMP>1564799843534</TIMESTAMP>【creation/modification timestamp】
      <PERMISSION_STATUS>【permission status】
        <USERNAME>hadoop</USERNAME>【user name】
        <GROUPNAME>supergroup</GROUPNAME>【group name】
        <MODE>493</MODE>【permission bits; 493 decimal = 0755 octal】
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_MKDIR</OPCODE>
    <DATA>
      <TXID>5</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16387</INODEID>
      <PATH>/public</PATH>
      <TIMESTAMP>1564799851083</TIMESTAMP>
      <PERMISSION_STATUS>
        <USERNAME>hadoop</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>493</MODE>
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD</OPCODE>【operation that writes a new file into HDFS】
    <DATA>
      <TXID>6</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16388</INODEID>
      <PATH>/Hello.java._COPYING_</PATH>
      <REPLICATION>3</REPLICATION>【replication factor is 3】
      <MTIME>1564799869126</MTIME>【modification time】
      <ATIME>1564799869126</ATIME>【access time】
      <BLOCKSIZE>134217728</BLOCKSIZE>【block size】
      <CLIENT_NAME>DFSClient_NONMAPREDUCE_1063291247_1</CLIENT_NAME>【client name; NONMAPREDUCE means the write did not come from a MapReduce task】
      <CLIENT_MACHINE>192.168.204.204</CLIENT_MACHINE>【client machine that uploaded the file】
      <OVERWRITE>true</OVERWRITE>【whether an existing file may be overwritten】
      <PERMISSION_STATUS>【permission status】
        <USERNAME>hadoop</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
      <RPC_CLIENTID>0fb00623-1d6f-454d-9e29-e5caed41f683</RPC_CLIENTID>【RPC client ID】
      <RPC_CALLID>3</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>【allocate a block ID】
    <DATA>
      <TXID>7</TXID>
      <BLOCK_ID>1073741825</BLOCK_ID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>【set the generation stamp】
    <DATA>
      <TXID>8</TXID>
      <GENSTAMPV2>1001</GENSTAMPV2>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD_BLOCK</OPCODE>【add a data block】
    <DATA>
      <TXID>9</TXID>
      <PATH>/Hello.java._COPYING_</PATH>
      <BLOCK>
        <BLOCK_ID>1073741825</BLOCK_ID>
        <NUM_BYTES>0</NUM_BYTES>
        <GENSTAMP>1001</GENSTAMP>
      </BLOCK>
      <RPC_CLIENTID></RPC_CLIENTID>
      <RPC_CALLID>-2</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_CLOSE</OPCODE>【close the file】
    <DATA>
      <TXID>10</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>0</INODEID>
      <PATH>/Hello.java._COPYING_</PATH>
      <REPLICATION>3</REPLICATION>
      <MTIME>1564799870336</MTIME>
      <ATIME>1564799869126</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>【block size 128 MB; this is just an attribute, not the actual data size】
      <CLIENT_NAME></CLIENT_NAME>
      <CLIENT_MACHINE></CLIENT_MACHINE>
      <OVERWRITE>false</OVERWRITE>
      <BLOCK>
        <BLOCK_ID>1073741825</BLOCK_ID>
        <NUM_BYTES>106</NUM_BYTES>【this is the actual data size】
        <GENSTAMP>1001</GENSTAMP>
      </BLOCK>
      <PERMISSION_STATUS>
        <USERNAME>hadoop</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_RENAME_OLD</OPCODE>【rename operation】
    <DATA>
      <TXID>11</TXID>
      <LENGTH>0</LENGTH>
      <SRC>/Hello.java._COPYING_</SRC>
      <DST>/Hello.java</DST>
      <TIMESTAMP>1564799870345</TIMESTAMP>
      <RPC_CLIENTID>0fb00623-1d6f-454d-9e29-e5caed41f683</RPC_CLIENTID>
      <RPC_CALLID>9</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_MKDIR</OPCODE>
    <DATA>
      <TXID>12</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16389</INODEID>
      <PATH>/mumu/Tree</PATH>
      <TIMESTAMP>1564799883850</TIMESTAMP>
      <PERMISSION_STATUS>
        <USERNAME>hadoop</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>493</MODE>
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD</OPCODE>
    <DATA>
      <TXID>13</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16390</INODEID>
      <PATH>/mumu/Tree/tree.txt._COPYING_</PATH>
      <REPLICATION>3</REPLICATION>
      <MTIME>1564799883912</MTIME>
      <ATIME>1564799883912</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>
      <CLIENT_NAME>DFSClient_NONMAPREDUCE_-104925683_1</CLIENT_NAME>
      <CLIENT_MACHINE>192.168.204.204</CLIENT_MACHINE>
      <OVERWRITE>true</OVERWRITE>
      <PERMISSION_STATUS>
        <USERNAME>hadoop</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
      <RPC_CLIENTID>53d826d9-05b8-4cc8-87ea-e3f46226f50a</RPC_CLIENTID>
      <RPC_CALLID>7</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>
    <DATA>
      <TXID>14</TXID>
      <BLOCK_ID>1073741826</BLOCK_ID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
    <DATA>
      <TXID>15</TXID>
      <GENSTAMPV2>1002</GENSTAMPV2>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD_BLOCK</OPCODE>
    <DATA>
      <TXID>16</TXID>
      <PATH>/mumu/Tree/tree.txt._COPYING_</PATH>
      <BLOCK>
        <BLOCK_ID>1073741826</BLOCK_ID>
        <NUM_BYTES>0</NUM_BYTES>
        <GENSTAMP>1002</GENSTAMP>
      </BLOCK>
      <RPC_CLIENTID></RPC_CLIENTID>
      <RPC_CALLID>-2</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_CLOSE</OPCODE>
    <DATA>
      <TXID>17</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>0</INODEID>
      <PATH>/mumu/Tree/tree.txt._COPYING_</PATH>
      <REPLICATION>3</REPLICATION>
      <MTIME>1564799884244</MTIME>
      <ATIME>1564799883912</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>
      <CLIENT_NAME></CLIENT_NAME>
      <CLIENT_MACHINE></CLIENT_MACHINE>
      <OVERWRITE>false</OVERWRITE>
      <BLOCK>
        <BLOCK_ID>1073741826</BLOCK_ID>
        <NUM_BYTES>866</NUM_BYTES>
        <GENSTAMP>1002</GENSTAMP>
      </BLOCK>
      <PERMISSION_STATUS>
        <USERNAME>hadoop</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_RENAME_OLD</OPCODE>
    <DATA>
      <TXID>18</TXID>
      <LENGTH>0</LENGTH>
      <SRC>/mumu/Tree/tree.txt._COPYING_</SRC>
      <DST>/mumu/Tree/tree.txt</DST>
      <TIMESTAMP>1564799884253</TIMESTAMP>
      <RPC_CLIENTID>53d826d9-05b8-4cc8-87ea-e3f46226f50a</RPC_CLIENTID>
      <RPC_CALLID>12</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_DELETE</OPCODE>【delete operation】
    <DATA>
      <TXID>19</TXID>
      <LENGTH>0</LENGTH>
      <PATH>/public</PATH>
      <TIMESTAMP>1564799905990</TIMESTAMP>
      <RPC_CLIENTID>fa4bb7a4-9620-4a1d-af2b-402eda4999fb</RPC_CLIENTID>
      <RPC_CALLID>3</RPC_CALLID>
    </DATA>
  </RECORD>
</EDITS>
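
For each uploaded file, the records above follow a fixed lifecycle: OP_ADD creates the temporary ._COPYING_ file, then OP_ALLOCATE_BLOCK_ID, OP_SET_GENSTAMP_V2 and OP_ADD_BLOCK are issued per block, OP_CLOSE seals the file with its real byte counts, and OP_RENAME_OLD drops the ._COPYING_ suffix. That sequence is easy to pull out of the dump with grep:

  • [hadoop@master ~]$ grep -E '<OPCODE>|<PATH>|<SRC>|<DST>' ~/ed03-19.xml
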
  • [hadoop@master current]$ hdfs oev -p XML -i edits_0000000000000000020-0000000000000000021 -o ~/ed20-21.xml
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>20</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_END_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>21</TXID>
    </DATA>
  </RECORD>
</EDITS>

  • [hadoop@master current]$ hdfs oiv -p XML -i fsimage_0000000000000000021 -o ~/fs21.xml
<?xml version="1.0"?>
<fsimage>
	<NameSection>
		<genstampV1>1000</genstampV1>
		<genstampV2>1002</genstampV2>
		<genstampV1Limit>0</genstampV1Limit>
		<lastAllocatedBlockId>1073741826</lastAllocatedBlockId>
		<txid>21</txid>
	</NameSection>
	
	<INodeSection>
		<lastInodeId>16390</lastInodeId>
		<inode>
			<id>16385</id>
			<type>DIRECTORY</type>
			<name></name>
			<mtime>1564799905990</mtime>
			<permission>hadoop:supergroup:rwxr-xr-x</permission>
			<nsquota>9223372036854775807</nsquota>
			<dsquota>-1</dsquota>
		</inode>
		
		<inode>
			<id>16386</id>
			<type>DIRECTORY</type>
			<name>mumu</name>
			<mtime>1564799883850</mtime>
			<permission>hadoop:supergroup:rwxr-xr-x</permission>
			<nsquota>-1</nsquota>
			<dsquota>-1</dsquota>
		</inode>
		
		<inode>
			<id>16388</id>
			<type>FILE</type>【inode type: file】
			<name>Hello.java</name>
			<replication>3</replication>
			<mtime>1564799870336</mtime>
			<atime>1564799869126</atime>
			<perferredBlockSize>134217728</perferredBlockSize>【preferred block size】
			<permission>hadoop:supergroup:rw-r--r--</permission>
			<blocks>
				<block>
					<id>1073741825</id>
					<genstamp>1001</genstamp>
					<numBytes>106</numBytes>【file size in bytes】
				</block>
			</blocks>
		</inode>
		
		<inode>
			<id>16389</id>
			<type>DIRECTORY</type>
			<name>Tree</name>
			<mtime>1564799884253</mtime>
			<permission>hadoop:supergroup:rwxr-xr-x</permission>
			<nsquota>-1</nsquota>
			<dsquota>-1</dsquota>
		</inode>
		
		<inode>
			<id>16390</id>
			<type>FILE</type>
			<name>tree.txt</name>
			<replication>3</replication>
			<mtime>1564799884244</mtime>
			<atime>1564799883912</atime>
			<perferredBlockSize>134217728</perferredBlockSize>
			<permission>hadoop:supergroup:rw-r--r--</permission>
			<blocks>
				<block>
				<id>1073741826</id>
				<genstamp>1002</genstamp>
				<numBytes>866</numBytes>
				</block>
			</blocks>
		</inode>
	</INodeSection>
	
	<INodeReferenceSection></INodeReferenceSection>
	
	<SnapshotSection>
		<snapshotCounter>0</snapshotCounter>
	</SnapshotSection>
	
	<INodeDirectorySection>
		<directory>
			<parent>16385</parent>
			<inode>16388</inode>
			<inode>16386</inode>
		</directory>
		
		<directory>
			<parent>16386</parent>
			<inode>16389</inode>
		</directory>
		
		<directory>
			<parent>16389</parent>
			<inode>16390</inode>
		</directory>
	</INodeDirectorySection>
	
	<FileUnderConstructionSection></FileUnderConstructionSection>
	
	<SnapshotDiffSection>
		<diff>
			<inodeid>16385</inodeid>
		</diff>
	</SnapshotDiffSection>
	
	<SecretManagerSection>
		<currentId>0</currentId>
		<tokenSequenceNumber>0</tokenSequenceNumber>
	</SecretManagerSection>
	
	<CacheManagerSection>
		<nextDirectiveId>1</nextDirectiveId>
	</CacheManagerSection>
</fsimage>
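
The INodeDirectorySection above is what turns the flat inode list back into a tree: root (16385) contains Hello.java (16388) and mumu (16386), mumu contains Tree (16389), and Tree contains tree.txt (16390), exactly the layout shown by hadoop fs -ls -R /. For a flat, human-readable listing instead of XML, oiv also offers a Delimited processor (available in Hadoop 2.6+ but marked experimental, so treat this as a sketch):

  • [hadoop@master current]$ hdfs oiv -p Delimited -i fsimage_0000000000000000021 -o ~/fs21.txt
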
  • edits_0000000000000000022-0000000000000000023 performed no file-system operations;

    it only contains the start and end of a log segment

  • fsimage_0000000000000000023 is identical to fsimage_0000000000000000021 (see the checkpoint-schedule note below)
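
New image/edits pairs like these appear even when no commands are run because the SecondaryNameNode checkpoints on a schedule. The schedule is controlled by two properties, shown here with their Hadoop 2.x default values; this is only a sketch of how they would be overridden in hdfs-site.xml, so verify the defaults against your own hdfs-default.xml:

<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value>    <!-- checkpoint at least once every 3600 seconds -->
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value> <!-- or earlier, once 1,000,000 uncheckpointed transactions accumulate -->
</property>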

  • [hadoop@master current]$ hdfs oev -p XML -i edits_0000000000000000024-0000000000000000034 -o ~/ed24-34.xml

<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>24</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD</OPCODE>
    <DATA>
      <TXID>25</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16391</INODEID>
      <PATH>/mumu/hadoop.zip._COPYING_</PATH>
      <REPLICATION>3</REPLICATION>
      <MTIME>1564820654715</MTIME>
      <ATIME>1564820654715</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>
      <CLIENT_NAME>DFSClient_NONMAPREDUCE_575173787_1</CLIENT_NAME>
      <CLIENT_MACHINE>192.168.204.204</CLIENT_MACHINE>
      <OVERWRITE>true</OVERWRITE>
      <PERMISSION_STATUS>
        <USERNAME>hadoop</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
      <RPC_CLIENTID>a9e41525-a6da-4ce4-85a9-4fe463520565</RPC_CLIENTID>
      <RPC_CALLID>3</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>
    <DATA>
      <TXID>26</TXID>
      <BLOCK_ID>1073741827</BLOCK_ID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
    <DATA>
      <TXID>27</TXID>
      <GENSTAMPV2>1003</GENSTAMPV2>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD_BLOCK</OPCODE>
    <DATA>
      <TXID>28</TXID>
      <PATH>/mumu/hadoop.zip._COPYING_</PATH>
      <BLOCK>
        <BLOCK_ID>1073741827</BLOCK_ID>
        <NUM_BYTES>0</NUM_BYTES>
        <GENSTAMP>1003</GENSTAMP>
      </BLOCK>
      <RPC_CLIENTID></RPC_CLIENTID>
      <RPC_CALLID>-2</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>
    <DATA>
      <TXID>29</TXID>
      <BLOCK_ID>1073741828</BLOCK_ID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
    <DATA>
      <TXID>30</TXID>
      <GENSTAMPV2>1004</GENSTAMPV2>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD_BLOCK</OPCODE>
    <DATA>
      <TXID>31</TXID>
      <PATH>/mumu/hadoop.zip._COPYING_</PATH>
      <BLOCK>
        <BLOCK_ID>1073741827</BLOCK_ID>
        <NUM_BYTES>134217728</NUM_BYTES>
        <GENSTAMP>1003</GENSTAMP>
      </BLOCK>
      <BLOCK>
        <BLOCK_ID>1073741828</BLOCK_ID>
        <NUM_BYTES>0</NUM_BYTES>
        <GENSTAMP>1004</GENSTAMP>
      </BLOCK>
      <RPC_CLIENTID></RPC_CLIENTID>
      <RPC_CALLID>-2</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_CLOSE</OPCODE>
    <DATA>
      <TXID>32</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>0</INODEID>
      <PATH>/mumu/hadoop.zip._COPYING_</PATH>
      <REPLICATION>3</REPLICATION>
      <MTIME>1564820688868</MTIME>
      <ATIME>1564820654715</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>
      <CLIENT_NAME></CLIENT_NAME>
      <CLIENT_MACHINE></CLIENT_MACHINE>
      <OVERWRITE>false</OVERWRITE>
      <BLOCK>【a file larger than 128 MB is split into two blocks】
        <BLOCK_ID>1073741827</BLOCK_ID>
        <NUM_BYTES>134217728</NUM_BYTES>
        <GENSTAMP>1003</GENSTAMP>
      </BLOCK>
      <BLOCK>
        <BLOCK_ID>1073741828</BLOCK_ID>
        <NUM_BYTES>87469692</NUM_BYTES>
        <GENSTAMP>1004</GENSTAMP>
      </BLOCK>
      <PERMISSION_STATUS>
        <USERNAME>hadoop</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_RENAME_OLD</OPCODE>
    <DATA>
      <TXID>33</TXID>
      <LENGTH>0</LENGTH>
      <SRC>/mumu/hadoop.zip._COPYING_</SRC>
      <DST>/mumu/hadoop.zip</DST>
      <TIMESTAMP>1564820688985</TIMESTAMP>
      <RPC_CLIENTID>a9e41525-a6da-4ce4-85a9-4fe463520565</RPC_CLIENTID>
      <RPC_CALLID>10</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_END_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>34</TXID>
    </DATA>
  </RECORD>
</EDITS>
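
A quick size check explains the two blocks: hadoop.zip is 134217728 + 87469692 = 221687420 bytes (about 211 MB), so with a 128 MB block size it occupies one full block (1073741827) plus an 87469692-byte tail block (1073741828).
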
  • [hadoop@master current]$ hdfs oiv -p XML -i fsimage_0000000000000000034 -o ~/fs34.xml
<?xml version="1.0"?>
<fsimage>
	<NameSection>
		<genstampV1>1000</genstampV1>
		<genstampV2>1004</genstampV2>
		<genstampV1Limit>0</genstampV1Limit>
		<lastAllocatedBlockId>1073741828</lastAllocatedBlockId>
		<txid>34</txid>
	</NameSection>
	
	<INodeSection>
		<lastInodeId>16391</lastInodeId>
		<inode>
			<id>16385</id>
			<type>DIRECTORY</type>
			<name></name>
			<mtime>1564799905990</mtime>
			<permission>hadoop:supergroup:rwxr-xr-x</permission>
			<nsquota>9223372036854775807</nsquota>
			<dsquota>-1</dsquota>
		</inode>
		
		<inode>
			<id>16386</id>
			<type>DIRECTORY</type>
			<name>mumu</name>
			<mtime>1564820688985</mtime>
			<permission>hadoop:supergroup:rwxr-xr-x</permission>
			<nsquota>-1</nsquota>
			<dsquota>-1</dsquota>
		</inode>
		
		<inode>
			<id>16388</id>
			<type>FILE</type>
			<name>Hello.java</name>
			<replication>3</replication>
			<mtime>1564799870336</mtime>
			<atime>1564799869126</atime>
			<perferredBlockSize>134217728</perferredBlockSize>
			<permission>hadoop:supergroup:rw-r--r--</permission>
			<blocks>
				<block>
					<id>1073741825</id>
					<genstamp>1001</genstamp>
					<numBytes>106</numBytes>
				</block>
			</blocks>
		</inode>
		<inode>
			<id>16389</id>
			<type>DIRECTORY</type>
			<name>Tree</name>
			<mtime>1564799884253</mtime>
			<permission>hadoop:supergroup:rwxr-xr-x</permission>
			<nsquota>-1</nsquota>
			<dsquota>-1</dsquota>
		</inode>
		<inode>
			<id>16390</id>
			<type>FILE</type>
			<name>tree.txt</name>
			<replication>3</replication>
			<mtime>1564799884244</mtime>
			<atime>1564799883912</atime>
			<perferredBlockSize>134217728</perferredBlockSize>
			<permission>hadoop:supergroup:rw-r--r--</permission>
			<blocks>
				<block>
					<id>1073741826</id>
					<genstamp>1002</genstamp>
					<numBytes>866</numBytes>
				</block>
			</blocks>
		</inode>
		<inode>
			<id>16391</id>
			<type>FILE</type>
			<name>hadoop.zip</name>
			<replication>3</replication>
			<mtime>1564820688868</mtime>
			<atime>1564820654715</atime>
			<perferredBlockSize>134217728</perferredBlockSize>
			<permission>hadoop:supergroup:rw-r--r--</permission>
			<blocks>
				<block>
					<id>1073741827</id>
					<genstamp>1003</genstamp>
					<numBytes>134217728</numBytes>
				</block>
				<block>
					<id>1073741828</id>
					<genstamp>1004</genstamp>
					<numBytes>87469692</numBytes>
				</block>
			</blocks>
		</inode>
	</INodeSection>
	
	<INodeReferenceSection></INodeReferenceSection>
	<SnapshotSection>
		<snapshotCounter>0</snapshotCounter>
	</SnapshotSection>

	<INodeDirectorySection>
		<directory>
			<parent>16385</parent>
			<inode>16388</inode>
			<inode>16386</inode>	
		</directory>
		<directory>
			<parent>16386</parent>
			<inode>16389</inode>
			<inode>16391</inode>
		</directory>
		<directory>
			<parent>16389</parent>
			<inode>16390</inode>
		</directory>
	</INodeDirectorySection>
	
	<FileUnderConstructionSection></FileUnderConstructionSection>

	<SnapshotDiffSection>
		<diff>
			<inodeid>16385</inodeid>
		</diff>
	</SnapshotDiffSection>
	
	<SecretManagerSection>
		<currentId>0</currentId>
		<tokenSequenceNumber>0</tokenSequenceNumber>
	</SecretManagerSection>
	
	<CacheManagerSection>
		<nextDirectiveId>1</nextDirectiveId>
	</CacheManagerSection>
</fsimage>
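
Putting the pieces together: each fsimage_N is produced at a checkpoint, when the SecondaryNameNode fetches the previous image plus the finalized edit segments, replays the edits, and ships the merged image back to the NameNode. fsimage_0000000000000000034 is therefore fsimage_0000000000000000023 plus edits_0000000000000000024-0000000000000000034, and the numeric suffix of an fsimage file is always the last transaction ID it covers.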