Operating HDFS

20:55 2010-6-2

Environment:

Hadoop 0.20.2

CentOS 5.4 

java version "1.6.0_20-ea"

This uses a single-node (standalone) Hadoop setup.

First, a screenshot of my run:

[screenshot omitted]

This post mainly follows this article:

http://myjavanotebook.blogspot.com/2008/05/hadoop-file-system-tutorial.html

 

1. Copy a file from the local file system to HDFS

The srcFile variable needs to contain the full name (path + file name) of the file in the local file system.

The dstFile variable needs to contain the desired full name of the file in the Hadoop file system.

 

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstFile);
hdfs.copyFromLocalFile(srcPath, dstPath);
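
For readers who want to run the snippet as-is, here is a minimal self-contained sketch; the source and destination paths are hypothetical placeholders, not paths from the original article.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToHdfs {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath
        Configuration config = new Configuration();
        FileSystem hdfs = FileSystem.get(config);
        Path srcPath = new Path("/tmp/local.txt");          // hypothetical local file
        Path dstPath = new Path("/user/cluster/local.txt"); // hypothetical HDFS destination
        hdfs.copyFromLocalFile(srcPath, dstPath);
        hdfs.close();
    }
}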

 

2. Create an HDFS file

The fileName variable contains the file name and path in the Hadoop file system.

The content of the file is the buff variable, which is an array of bytes.

// byte[] buff - the content of the file
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FSDataOutputStream outputStream = hdfs.create(path);
outputStream.write(buff, 0, buff.length);
outputStream.close(); // flush and close the stream so the data is persisted
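
To read the file back, open returns an FSDataInputStream; a minimal sketch, reusing the hdfs and path variables from above:

// Stream the file's contents to stdout; IOUtils closes the stream when done.
FSDataInputStream inputStream = hdfs.open(path);
org.apache.hadoop.io.IOUtils.copyBytes(inputStream, System.out, 4096, true);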

 

3. Rename an HDFS file

In order to rename a file in the Hadoop file system, we need the full name (path + name) of the file we want to rename. The rename method returns true if the file was renamed, otherwise false.

 

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path fromPath = new Path(fromFileName);
Path toPath = new Path(toFileName);
boolean isRenamed = hdfs.rename(fromPath, toPath);
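
Since rename reports failure only through its return value rather than an exception, checking it is the caller's job; a minimal usage sketch:

if (!hdfs.rename(fromPath, toPath)) {
    System.err.println("rename failed: " + fromPath + " -> " + toPath);
}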

 

4. Delete an HDFS file

In order to delete a file in the Hadoop file system, we need the full name (path + name) of the file we want to delete. The delete method returns true if the file was deleted, otherwise false.

 

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, false);

Recursive delete:

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, true);
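
A common pattern (the complete program below uses it too) is to test for existence first, so a failed delete can be told apart from a missing file; a minimal sketch:

if (hdfs.exists(path)) {
    // false: do not delete directories recursively
    boolean isDeleted = hdfs.delete(path, false);
    System.out.println("deleted " + path.getName() + "? " + isDeleted);
}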

 

  

5. Get an HDFS file's last modification time

In order to get the last modification time of a file in the Hadoop file system, we need the full name (path + name) of the file.

 

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
long modificationTime = fileStatus.getModificationTime();
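
The returned value is measured in milliseconds since the Unix epoch; the complete program below turns it into a readable date like this:

// modificationTime is milliseconds since the epoch
System.out.println(new java.util.Date(modificationTime));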

 

  

6. Check if a file exists in HDFS

In order to check the existence of a file in the Hadoop file system, we need the full name (path + name) of the file we want to check. The exists method returns true if the file exists, otherwise false.

 

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isExists = hdfs.exists(path);

 

  

7. Get the locations of a file in the HDFS cluster

A file can exist on more than one node in the Hadoop file system cluster for two reasons:

Based on the HDFS cluster configuration, Hadoop saves parts of files on different nodes in the cluster.

Based on the HDFS cluster configuration, Hadoop saves more than one copy of each file on different nodes for redundancy (the default is three).

 

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
// Note: the original article passed path here by mistake;
// getFileBlockLocations takes the FileStatus instead.
BlockLocation[] blkLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
int blkCount = blkLocations.length;
for (int i = 0; i < blkCount; i++) {
    String[] hosts = blkLocations[i].getHosts();
    // Do something with the block hosts
}
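
Printing the hosts array directly only shows the array reference (the run output below demonstrates exactly that); java.util.Arrays.toString gives readable output:

for (int i = 0; i < blkCount; i++) {
    String[] hosts = blkLocations[i].getHosts();
    System.out.println("block " + i + " on: " + java.util.Arrays.toString(hosts));
}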

 

8. Get a list of all the node host names in the HDFS cluster

 

  This method casts the FileSystem object to a DistributedFileSystem object.

  This method will work only when Hadoop is configured as a cluster. Running Hadoop on the local machine only, in a non-cluster configuration, will cause this method to throw an Exception.

   

Configuration config = new Configuration();
FileSystem fs = FileSystem.get(config);
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
String[] names = new String[dataNodeStats.length];
for (int i = 0; i < dataNodeStats.length; i++) {
    names[i] = dataNodeStats[i].getHostName();
}
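
One way to avoid the exception on a non-cluster setup is to guard the cast; a minimal sketch:

if (fs instanceof DistributedFileSystem) {
    DistributedFileSystem hdfs = (DistributedFileSystem) fs;
    DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
    for (DatanodeInfo node : dataNodeStats) {
        System.out.println(node.getHostName());
    }
} else {
    System.err.println("not running against HDFS; cannot list datanodes");
}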

  

  

Complete example program

 

/*
 * Demonstrates the HDFS Java API.
 */


import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.hdfs.*;
import org.apache.hadoop.hdfs.protocol.*;
import java.util.Date;

public class DFSOperater {

    /**
     * @param args
     */
    public static void main(String[] args) {

        Configuration conf = new Configuration();
        
        try {
            // Get a list of all the nodes host names in the HDFS cluster

            FileSystem fs = FileSystem.get(conf);
            DistributedFileSystem hdfs = (DistributedFileSystem)fs;
            DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
            String[] names = new String[dataNodeStats.length];
            System.out.println("list of all the nodes in HDFS cluster:"); //print info

            for(int i=0; i < dataNodeStats.length; i++){
                names[i] = dataNodeStats[i].getHostName();
                System.out.println(names[i]); //print info

            }
            Path f = new Path("/user/cluster/dfs.txt");
            
            //check if a file exists in HDFS

            boolean isExists = fs.exists(f);
            System.out.println("The file exists? [" + isExists + "]");
            
            //if the file exist, delete it

            if(isExists){
                 boolean isDeleted = hdfs.delete(f, false); // false: not recursive

                 if (isDeleted) System.out.println("now delete " + f.getName());
            }
            
            //create and write

            System.out.println("create and write [" + f.getName() + "] to hdfs:");
            FSDataOutputStream os = fs.create(f, true, 0); // create(path, overwrite, bufferSize)
            for(int i=0; i<10; i++){
                os.writeChars("test hdfs ");
            }
            os.writeChars("\n");
            os.close();
            
            //get the locations of a file in HDFS

            System.out.println("locations of file in HDFS:");
            FileStatus filestatus = fs.getFileStatus(f);
            BlockLocation[] blkLocations = fs.getFileBlockLocations(filestatus, 0,filestatus.getLen());
            int blkCount = blkLocations.length;
            for(int i=0; i < blkCount; i++){
                String[] hosts = blkLocations[i].getHosts();
                // Do something with the block hosts

                // Note: this prints the array reference, not the host names
                // (see the run output below); Arrays.toString(hosts) would be readable.
                System.out.println(hosts);
            }
            
            //get HDFS file last modification time

            long modificationTime = filestatus.getModificationTime(); // measured in milliseconds since the epoch

            Date d = new Date(modificationTime);
            System.out.println(d);
            //reading from HDFS

            System.out.println("read [" + f.getName() + "] from hdfs:");
     FSDataInputStream dis = fs.open(f);
     System.out.println(dis.readUTF());
     dis.close();

        } catch (Exception e) {
            // TODO: handle exception

            e.printStackTrace();
        }
                
    }

}

 

After compiling, I copied the jar over to node1 and ran it from the command line. Embarrassingly, I never figured out the Eclipse plugin.

[cluster /opt/hadoop/source]$cp /opt/winxp/hadoop/dfs_operator.jar .
[cluster /opt/hadoop/source]$hadoop jar dfs_operator.jar DFSOperater
list of all the nodes in HDFS cluster:
node1
The file exists? [true]
now delete dfs.txt
create and write [dfs.txt] to hdfs:
locations of file in HDFS:
[Ljava.lang.String;@72ffb
Wed Jun 02 18:29:14 CST 2010
read [dfs.txt] from hdfs:
est hdfs test hdfs test hdfs test hdfs test hdfs test hdfs

 

It ran successfully! Now check the output file:

[cluster /opt/hadoop/source]$hadoop fs -cat dfs.txt
test hdfs test hdfs test hdfs test hdfs test hdfs test hdfs test hdfs test hdfs test hdfs test hdfs
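
Note the mangled first word in the program's read-back above ("est hdfs" instead of "test hdfs"): writeChars() writes each character as two raw bytes, while readUTF() expects a length-prefixed UTF record, so the first two bytes get consumed as the length prefix (and the interleaved NUL bytes are simply invisible on the terminal). Pairing the calls symmetrically avoids this; a minimal sketch, reusing fs and f from the program above:

// Write and read with matching DataOutput/DataInput methods.
FSDataOutputStream os = fs.create(f, true); // overwrite if the file exists
os.writeUTF("test hdfs");                   // length-prefixed UTF string
os.close();

FSDataInputStream dis = fs.open(f);
System.out.println(dis.readUTF());          // prints: test hdfs
dis.close();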

 

Author: qkshan@twitter, <myduanli@gmail.com>
Source: http://duanli.cublog.cn
Note: please credit the source when reposting; email the author with questions.

 

 

 

 
