Hadoop: HDFS API Development Lab


On node 1, open a terminal:

su
(enter the root password)

Start the HDFS cluster:

start-dfs.sh

1. HDFS read experiment

On the Client machine, create a test file:

su
(enter the root password)
vi /root/words.txt

Enter:

hello world
this is a test file
for learning

Upload the file to the root directory of HDFS:

hdfs dfs -put /root/words.txt /

Check that the upload succeeded:

hdfs dfs -ls /

View the file contents:

hdfs dfs -cat /words.txt

Open Eclipse:

cd /usr/eclipse/
./eclipse

Select the workspace at /root/workspace.
In the rpc package from the earlier RPC communication experiment (for convenience; you can also create a new project):
Right-click the rpc package - New - Class
Name: HDFS_API
- Finish
Enter the following:

package rpc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFS_API {
	Configuration conf;
	FileSystem fs;
	Path path;
	FSDataInputStream fis;
	FSDataOutputStream fos;
	FileStatus filestatus;
	
	public void testRead() throws Exception {
		conf = new Configuration();
		// Point the client at the NameNode of the cluster
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf); 
		path = new Path("/words.txt"); 
		// Open the HDFS file and read up to 200 bytes into a buffer
		fis = fs.open(path); 
		byte b[] = new byte[200];
		int i = fis.read(b); 
		System.out.println(new String(b, 0, i));
	}
}

Now create a test class, testAPI, and write the test code in it:
Right-click the rpc package - New - Class
Name: testAPI
Check the option that generates a main method
- Finish
Enter the following:

package rpc;

public class testAPI {

	public static void main(String[] args) throws Exception {
		// TODO Auto-generated method stub
		
		HDFS_API hdfs = new HDFS_API();
		hdfs.testRead();

	}

}

Click inside the main method, then Run As - 1 Java Application.
The program reads and prints the contents of the file stored on the HDFS server.
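
The read above uses a fixed 200-byte buffer, so only the start of a larger file would be printed. As an optional variation (my own sketch, not part of the original lab; the class name HDFS_ReadStream is made up), Hadoop's IOUtils can stream the whole file to standard output:

package rpc;

import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HDFS_ReadStream {
	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		FileSystem fs = FileSystem.get(conf);
		InputStream in = null;
		try {
			// Copy the whole HDFS file to stdout with a 4 KB buffer, whatever its size
			in = fs.open(new Path("/words.txt"));
			IOUtils.copyBytes(in, System.out, 4096, false);
		} finally {
			IOUtils.closeStream(in);
		}
	}
}

It can be run the same way (Run As - 1 Java Application) if you want to try it.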

2. HDFS write experiment

On the Client machine, open Eclipse:

su
(enter the root password)
cd /usr/eclipse/
./eclipse

Select the workspace at /root/workspace.
In the existing HDFS_API.java, add the following write method (only the new method is shown; the rest of the class is unchanged):

	public void testWrite() throws Exception {
		// Run as root, otherwise an ordinary user has no write permission on the HDFS root directory
		System.setProperty("HADOOP_USER_NAME", "root");
		conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf);
		// Create /words2.txt on HDFS and write a short string into it
		fos = fs.create(new Path("/words2.txt"));
		fos.writeChars("hello");
		fos.flush();
		fos.close();
	}

In testAPI.java:
Comment out the previous hdfs.testRead(); call and write:

package rpc;

public class testAPI {

	public static void main(String[] args) throws Exception {
		// TODO Auto-generated method stub
		
		HDFS_API hdfs = new HDFS_API();
		// hdfs.testRead();
		hdfs.testWrite();
	}

}

Open a new terminal window:

su
(enter the root password)
hdfs dfs -ls /

(The Client machine can access HDFS directly because this VM was cloned from node 1.)
You can see that words2.txt does not exist yet.

Back in Eclipse, click inside the main method, then Run As - 1 Java Application.
This creates words2.txt on the server, completing the write operation.
Return to the terminal window:

hdfs dfs -ls /

The server now has a words2.txt file.
View its contents:

hdfs dfs -cat /words2.txt

You should see hello.
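
A caveat on the output: writeChars() writes two bytes (UTF-16) per character, so the stored file actually contains NUL bytes between the letters, even though the terminal may still render it as hello. If a plain one-byte-per-character file is wanted, a hypothetical variant of testWrite() (my own sketch, not part of the lab) writes raw bytes instead:

	// Hypothetical variant of testWrite(): write plain UTF-8 bytes rather than UTF-16 chars
	public void testWriteBytes() throws Exception {
		System.setProperty("HADOOP_USER_NAME", "root");
		conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf);
		fos = fs.create(new Path("/words2.txt"));
		fos.write("hello".getBytes("UTF-8"));  // one byte per ASCII character
		fos.flush();
		fos.close();
	}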

3. HDFS file rename experiment

On the Client machine, open Eclipse:

su
(enter the root password)
cd /usr/eclipse/
./eclipse

Select the workspace at /root/workspace.
In the existing HDFS_API.java, add another method (only the new method is shown; the rest of the class is unchanged):

	public void testRename() throws Exception {
		// Run as root, otherwise an ordinary user has no write permission on the HDFS root directory
		System.setProperty("HADOOP_USER_NAME", "root");
		conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf);
		path = new Path("/words.txt");
		// Rename /words.txt to /mywords.txt; rename() returns true on success
		boolean flag = fs.rename(path, new Path("/mywords.txt"));
		System.out.println(flag);
	}
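
rename() reports most failures by returning false (for example when the source is missing or the destination already exists) rather than by throwing, so checking the paths first can make failures easier to diagnose. The following helper is my own optional sketch (renameIfPossible is a made-up name) and is not called anywhere in the lab:

	// Hypothetical helper: rename only when the source exists and the target does not
	public void renameIfPossible(String from, String to) throws Exception {
		System.setProperty("HADOOP_USER_NAME", "root");
		conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf);
		Path src = new Path(from);
		Path dst = new Path(to);
		if (fs.exists(src) && !fs.exists(dst)) {
			System.out.println(fs.rename(src, dst));
		} else {
			System.out.println("source missing or destination already exists");
		}
	}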

In testAPI.java:
Comment out the previous hdfs.testWrite(); call and write:

package rpc;

public class testAPI {

	public static void main(String[] args) throws Exception {
		// TODO Auto-generated method stub
		
		HDFS_API hdfs = new HDFS_API();
		// hdfs.testRead();
		// hdfs.testWrite();
		hdfs.testRename();
	}

}

Click inside the main method, then Run As - 1 Java Application.
The file has now been renamed.

In the terminal window:

su
(enter the root password)
hdfs dfs -ls /

You can see that words.txt has been renamed to mywords.txt.

4. Listing the files under an HDFS path

On the Client machine, open Eclipse:

su
(enter the root password)
cd /usr/eclipse/
./eclipse

Select the workspace at /root/workspace.
In the existing HDFS_API.java, add another method. This step also needs two new imports; only the additions are shown, the rest of the class is unchanged:

import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.RemoteIterator;

	public void testListFiles() throws Exception {
		conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf);
		path = new Path("/");
		// Recursively list every file under the root directory
		RemoteIterator<LocatedFileStatus> list = fs.listFiles(path, true);
		while(list.hasNext()) {
			System.out.println(list.next());
		}
	}
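
Printing a LocatedFileStatus directly dumps its whole toString() form. If only a few fields are of interest, they can be read off individually; the following variant is my own illustration (testListFileDetails is a made-up name) and is not used elsewhere in the lab:

	// Hypothetical variant: print selected fields for each file instead of the raw toString()
	public void testListFileDetails() throws Exception {
		conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf);
		RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/"), true);
		while (it.hasNext()) {
			LocatedFileStatus f = it.next();
			System.out.println(f.getPath() + "  " + f.getLen() + " bytes  "
					+ "replication=" + f.getReplication() + "  blocksize=" + f.getBlockSize());
		}
	}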

In testAPI.java:
Comment out the previous hdfs.testRename(); call and write:

package rpc;

public class testAPI {

	public static void main(String[] args) throws Exception {
		// TODO Auto-generated method stub
		
		HDFS_API hdfs = new HDFS_API();
		// hdfs.testRead();
		// hdfs.testWrite();
		// hdfs.testRename();
		hdfs.testListFiles();
	}

}

Click inside the main method, then Run As - 1 Java Application.
Information about each file under / is printed.

5. Deleting an HDFS file or directory

On the Client machine, open Eclipse:

su
(enter the root password)
cd /usr/eclipse/
./eclipse

Select the workspace at /root/workspace.
In the existing HDFS_API.java, add another method (only the new method is shown; the rest of the class is unchanged):

	public void testDelete() throws Exception {
		// Run as root, otherwise an ordinary user has no write permission on the HDFS root directory
		System.setProperty("HADOOP_USER_NAME", "root");
		conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf);
		path = new Path("/mywords.txt");
		boolean flag = fs.delete(path);
		System.out.println(flag);
	}
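
fs.delete(path) is the older single-argument form, which is deprecated in Hadoop 2.x; current code usually calls delete(Path, boolean recursive), where the boolean must be true to remove a non-empty directory. A hedged sketch of my own, using a made-up path and not called in the lab:

	// Hypothetical variant using the two-argument delete; true = delete directories recursively
	public void testDeleteRecursive() throws Exception {
		System.setProperty("HADOOP_USER_NAME", "root");
		conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
		fs = FileSystem.get(conf);
		boolean flag = fs.delete(new Path("/some/old/dir"), true);
		System.out.println(flag);
	}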

In testAPI.java:
Comment out the previous hdfs.testListFiles(); call and write:

package rpc;

public class testAPI {

	public static void main(String[] args) throws Exception {
		// TODO Auto-generated method stub
		
		HDFS_API hdfs = new HDFS_API();
		// hdfs.testRead();
		// hdfs.testWrite();
		// hdfs.testRename();
		// hdfs.testListFiles();
		hdfs.testDelete();
	}

}

Click inside the main method, then Run As - 1 Java Application.
The mywords.txt file is deleted.
In the terminal window:

su
(enter the root password)
hdfs dfs -ls /

You can see that mywords.txt has been removed.

6. Creating directories in HDFS

On the Client machine, open Eclipse:

su
(enter the root password)
cd /usr/eclipse/
./eclipse

Select the workspace at /root/workspace.
In the existing HDFS_API.java, add another method. This step also needs the java.net.URI import; only the additions are shown, the rest of the class is unchanged:

import java.net.URI;

	public void testMakedir() throws Exception {
		// Connect with the URI + Configuration + user form of FileSystem.get(), logging in as root
		fs = FileSystem.get(new URI("hdfs://192.168.100.10:8020"), new Configuration(), "root");
		// mkdirs() creates every missing directory level in the path
		boolean flag = fs.mkdirs(new Path("/javaApi/mk/dir1/dir2"));
		System.out.println(flag ? "mkdir succeeded" : "mkdir failed");
	}
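
mkdirs() behaves much like mkdir -p: it creates every missing level and, as far as I know, still returns true when the directory already exists. To distinguish "newly created" from "already there", an explicit exists() check works; this helper (makeDirIfAbsent, a name of my own) is only an illustration and is not called in the lab:

	// Hypothetical helper: report whether the directory was created or already present
	public void makeDirIfAbsent(String dir) throws Exception {
		fs = FileSystem.get(new URI("hdfs://192.168.100.10:8020"), new Configuration(), "root");
		Path p = new Path(dir);
		if (fs.exists(p)) {
			System.out.println(dir + " already exists");
		} else {
			System.out.println(fs.mkdirs(p) ? "created " + dir : "failed to create " + dir);
		}
	}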

In testAPI.java:
Comment out the previous hdfs.testDelete(); call and write:

package rpc;

public class testAPI {

	public static void main(String[] args) throws Exception {
		// TODO Auto-generated method stub
		
		HDFS_API hdfs = new HDFS_API();
		// hdfs.testRead();
		// hdfs.testWrite();
		// hdfs.testRename();
		// hdfs.testListFiles();
		// hdfs.testDelete();
		hdfs.testMakedir();
	}

}

Click inside the main method, then Run As - 1 Java Application.
In the terminal window:

su
(enter the root password)
hdfs dfs -ls /
hdfs dfs -ls /javaApi
hdfs dfs -ls /javaApi/mk

You can see the new javaApi directory and its subdirectories.

7. Uploading a file to HDFS

On the Client machine, open Eclipse:

su
(enter the root password)
cd /usr/eclipse/
./eclipse

Select the workspace at /root/workspace.
In the existing HDFS_API.java, add another method (only the new method is shown; the rest of the class is unchanged):

	public void testUpload() throws Exception {
		/*
		 * new URI("hdfs://192.168.100.10:8020"): URL of the Hadoop NameNode to connect to
		 * new Configuration(): use the default Hadoop configuration
		 * "root": the user to log in as
		 */
		conf = new Configuration();
		conf.setBoolean("dfs.support.append", true);
		fs = FileSystem.get(new URI("hdfs://192.168.100.10:8020"), conf, "root");
		String localsrc = "/HelloWorld.txt";  // create this local file first if it does not exist
		String hdfsDst = "/javaApi/mk/";
		fs.copyFromLocalFile(new Path(localsrc), new Path(hdfsDst));
		System.out.println("upload success");
	}
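
copyFromLocalFile() also has a four-argument overload, copyFromLocalFile(delSrc, overwrite, src, dst), which controls whether the local source is removed and whether an existing target is overwritten. A short sketch of my own (testUploadOverwrite is a made-up name), not called in the lab:

	// Hypothetical variant: keep the local copy (delSrc=false) and overwrite the target (overwrite=true)
	public void testUploadOverwrite() throws Exception {
		fs = FileSystem.get(new URI("hdfs://192.168.100.10:8020"), new Configuration(), "root");
		fs.copyFromLocalFile(false, true, new Path("/HelloWorld.txt"), new Path("/javaApi/mk/"));
		System.out.println("upload (overwrite) done");
	}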

In testAPI.java:
Comment out the previous hdfs.testMakedir(); call and write:

package rpc;

public class testAPI {

	public static void main(String[] args) throws Exception {
		// TODO Auto-generated method stub
		
		HDFS_API hdfs = new HDFS_API();
		// hdfs.testRead();
		// hdfs.testWrite();
		// hdfs.testRename();
		// hdfs.testListFiles();
		// hdfs.testDelete();
		// hdfs.testMakedir();
		hdfs.testUpload();
	}

}

Create the local source file first:

touch /HelloWorld.txt

Click inside the main method, then Run As - 1 Java Application.

In the terminal window:

su
(enter the root password)
hdfs dfs -ls /javaApi/mk

The upload succeeded.

8. Viewing a file's metadata in HDFS

On the Client machine, open Eclipse:

su
(enter the root password)
cd /usr/eclipse/
./eclipse

Select the workspace at /root/workspace.
In the existing HDFS_API.java, add one more method (only the new method is shown; the rest of the class is unchanged):

	public void getFileStatus() throws Exception {
		FileSystem fs = FileSystem.get(new URI("hdfs://192.168.100.10:8020"),
				new Configuration(), "root");
		// Fetch the metadata record for /words2.txt and print its individual fields
		FileStatus filestatus = fs.getFileStatus(new Path("/words2.txt"));
		System.out.println(filestatus);
		System.out.println("Path: " + filestatus.getPath());
		System.out.println("Block size: " + filestatus.getBlockSize());
		System.out.println("Owner: " + filestatus.getOwner() + ":" + filestatus.getGroup());
		System.out.println("Permissions: " + filestatus.getPermission());
		System.out.println("Length: " + filestatus.getLen());
		System.out.println("Replication: " + filestatus.getReplication());
		System.out.println("Modification time: " + filestatus.getModificationTime());
	}
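
getModificationTime() returns milliseconds since the Unix epoch, which is hard to read in raw form. A small optional sketch (printModificationTime is my own name, not part of the lab) formats it as a date:

	// Hypothetical helper: print the modification time as a human-readable date
	public void printModificationTime() throws Exception {
		FileSystem fs = FileSystem.get(new URI("hdfs://192.168.100.10:8020"),
				new Configuration(), "root");
		FileStatus st = fs.getFileStatus(new Path("/words2.txt"));
		java.text.SimpleDateFormat fmt = new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
		System.out.println("Modified: " + fmt.format(new java.util.Date(st.getModificationTime())));
	}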

In testAPI.java:
Comment out the previous calls and write:

package rpc;

public class testAPI {

	public static void main(String[] args) throws Exception {
		// TODO Auto-generated method stub
		
		HDFS_API hdfs = new HDFS_API();
		// hdfs.testRead();
		// hdfs.testWrite();
		// hdfs.testRename();
		// hdfs.testListFiles();
		// hdfs.testDelete();
		// hdfs.testMakedir();
		// hdfs.testUpload();
		hdfs.getFileStatus();
	}

}

Click inside the main method, then Run As - 1 Java Application.
The metadata of words2.txt is printed.

9. Using the API to read a file from the command line only

On the Client machine:

9.1 Prepare a test file

Create two directories:

su
(enter the root password)
mkdir /opt/modules/app/hadoop-2.8.5/myclass
mkdir /opt/modules/app/hadoop-2.8.5/input

Create the test file:

vi /opt/modules/app/hadoop-2.8.5/input/quangle.txt

Enter:

On the top of the Crumpetty Tree 
The Quangle Wangle sat, 
But his face you could not see, 
On account of his Beaver Hat. 

Upload it to the HDFS root directory:

hdfs dfs -copyFromLocal /opt/modules/app/hadoop-2.8.5/input/quangle.txt  /

9.2 Configure the environment

vi /opt/modules/app/hadoop-2.8.5/etc/hadoop/hadoop-env.sh

Then append the following at the end of the file:

HADOOP_HOME="/opt/modules/app/hadoop-2.8.5"
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/myclass

9.3 Write the program

Create the HDFS_API.java source file:

vi /opt/modules/app/hadoop-2.8.5/myclass/HDFS_API.java

Enter:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFS_API {
        public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
                FileSystem fs = FileSystem.get(conf);
                Path path = new Path(args[0]);
                FSDataInputStream fis = fs.open(path);
                byte b[] = new byte[200];
                int i = fis.read(b);
                System.out.println(new String(b,0,i));
        }
}
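
Because the buffer is only 200 bytes, a longer file would be truncated. If you want the command-line version to print files of any size, one option (my own sketch under the same configuration; HDFS_Cat is a made-up class name) is a read loop:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFS_Cat {
        public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://192.168.100.10:8020/");
                FileSystem fs = FileSystem.get(conf);
                FSDataInputStream fis = fs.open(new Path(args[0]));
                byte[] b = new byte[4096];
                int n;
                // Keep copying 4 KB chunks to stdout until read() signals end of stream
                while ((n = fis.read(b)) > 0) {
                        System.out.write(b, 0, n);
                }
                System.out.flush();
                fis.close();
        }
}

It would be compiled and run the same way as HDFS_API below (javac, then hadoop HDFS_Cat /quangle.txt).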

9.4 Compile the program

This is done through the CLASSPATH environment variable:
add the hadoop-common-2.8.5.jar package to CLASSPATH.

vi /etc/profile

The last few lines become:

JAVA_HOME=/opt/modules/jdk1.8.0_161
HADOOP_HOME="/opt/modules/app/hadoop-2.8.5/"
CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
CLASSPATH=$CLASSPATH:$HADOOP_HOME/share/hadoop/common/hadoop-common-2.8.5.jar
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export JAVA_HOME HADOOP_HOME CLASSPATH PATH

Make the new settings take effect in the current shell:

source /etc/profile

Compile the source code:

javac  /opt/modules/app/hadoop-2.8.5/myclass/HDFS_API.java 

Check the output:

ll /opt/modules/app/hadoop-2.8.5/myclass/

You can see that an HDFS_API.class file has been generated.

9.5 Run our own program to read a file on HDFS

hadoop HDFS_API /quangle.txt

The contents written earlier are printed.
