HTTPFS:
core-site.xml (allow the httpfs service user to impersonate other users via the proxyuser whitelist):
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
service hadoop-httpfs start
curl -X DELETE 'http://10.205.151.148:14000/webhdfs/v1/tmp?user.name=apprun&op=DELETE' | python -m json.tool
https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
curl -i -X DELETE "http://10.205.151.148:50070/webhdfs/v1/user/etl/test.txt?user.name=etl&op=DELETE"
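The two DELETE calls above can be sketched with Python's standard library alone. The function names are hypothetical; the host, port, user, and path values are the ones from the examples in this note. Note that WebHDFS requires the HTTP method to match the operation, so op=DELETE must be sent as an HTTP DELETE:

```python
# Hedged sketch of the curl DELETE calls above, stdlib only.
import json
import urllib.request

def webhdfs_url(host, port, path, user, op):
    """Build a WebHDFS v1 REST URL for the given HDFS path and operation."""
    return (f"http://{host}:{port}/webhdfs/v1/{path.lstrip('/')}"
            f"?user.name={user}&op={op}")

def webhdfs_delete(host, port, path, user):
    """Issue op=DELETE with the HTTP DELETE method, return the JSON reply."""
    req = urllib.request.Request(
        webhdfs_url(host, port, path, user, "DELETE"), method="DELETE")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Against HttpFS the same call simply targets port 14000 instead of the
# NameNode's 50070, e.g.:
# webhdfs_delete("10.205.151.148", 14000, "/tmp", "apprun")
```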
WEBHDFS + Kerberos:
hdfs-site.xml:
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.webhdfs.user.provider.user.pattern</name>
  <value>^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$</value>
</property>
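With Kerberos enabled, every WebHDFS request must authenticate via SPNEGO. A minimal smoke test, assuming a valid ticket from kinit and the same placeholder host/port as the URL template below (illustrative, not runnable as-is):

```shell
# --negotiate makes curl perform the SPNEGO/GSSAPI handshake;
# "-u :" tells it to take credentials from the Kerberos ticket cache.
curl -i --negotiate -u : "http://<active-namenode-server>:<namenode-port>/webhdfs/v1/tmp?op=GETFILESTATUS"
```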
http://<active-namenode-server>:<namenode-port>/webhdfs/v1/<file-path>?op=OPEN
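The OPEN template above is a two-step exchange: the NameNode answers with a 307 redirect pointing at the DataNode that holds the data, and the client then reads the bytes from that DataNode directly. With curl, -L follows the redirect (placeholders as in the template, illustrative only):

```shell
# Step 1 alone shows the redirect target (HTTP/1.1 307 TEMPORARY_REDIRECT):
curl -i "http://<active-namenode-server>:<namenode-port>/webhdfs/v1/<file-path>?op=OPEN"
# -L follows the redirect and streams the file content from the DataNode:
curl -L "http://<active-namenode-server>:<namenode-port>/webhdfs/v1/<file-path>?op=OPEN"
```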
WebHDFS needs direct access to every node in the cluster: when data is read, it is served straight from the DataNode that stores it. With HttpFS, by contrast, a single node acts as a "gateway" and is the sole point of data transfer to the client. As a result, HttpFS can become a bottleneck during large file transfers.