root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# mkdir input
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# cp conf/*.xml input
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
Exception in thread "main" java.io.IOException: Error opening job jar: hadoop-*-examples.jar
at org.apache.hadoop.util.RunJar.main(RunJar.java:90)
Caused by: java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:131)
at java.util.jar.JarFile.<init>(JarFile.java:150)
at java.util.jar.JarFile.<init>(JarFile.java:87)
at org.apache.hadoop.util.RunJar.main(RunJar.java:88)
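The root cause is shell globbing: no file in this release literally matches `hadoop-*-examples.jar`, so the unexpanded pattern reaches RunJar, which then cannot open it as a zip. A scratch-directory sketch of the mismatch (jar name taken from this release):

```shell
# In 0.20.203.0 the version sits AFTER "examples", so the quickstart's
# 'hadoop-*-examples.jar' pattern expands to nothing; the literal string
# is handed to RunJar, which fails with ZipException.
demo=$(mktemp -d)
cd "$demo"
touch hadoop-examples-0.20.203.0.jar
ls hadoop-examples-*.jar                      # matches the real file
ls hadoop-*-examples.jar 2>/dev/null || echo "no match: wrong pattern"
```

The second `ls` demonstrates the failure mode: an unmatched glob is passed through literally and names a file that does not exist.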
An `ls` showed the cause: the command in the official quickstart has no version number in the jar name, but the jar shipped locally does, so the versioned name has to be used:
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# bin/hadoop jar hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z.]+'
11/05/22 11:26:37 INFO mapred.FileInputFormat: Total input paths to process : 6
11/05/22 11:26:38 INFO mapred.JobClient: Running job: job_local_0001
11/05/22 11:26:38 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:38 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:38 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:38 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:38 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:38 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
11/05/22 11:26:39 INFO mapred.JobClient: map 0% reduce 0%
11/05/22 11:26:41 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/capacity-scheduler.xml:0+7457
11/05/22 11:26:41 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
11/05/22 11:26:41 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:41 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:41 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:41 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:41 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:41 INFO mapred.MapTask: Finished spill 0
11/05/22 11:26:41 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
11/05/22 11:26:42 INFO mapred.JobClient: map 100% reduce 0%
11/05/22 11:26:44 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/hadoop-policy.xml:0+4644
11/05/22 11:26:44 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
11/05/22 11:26:44 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:44 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:44 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:44 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:44 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:44 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
11/05/22 11:26:47 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/mapred-queue-acls.xml:0+2033
11/05/22 11:26:47 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
11/05/22 11:26:47 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:47 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:47 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:47 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:47 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:47 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
11/05/22 11:26:50 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/hdfs-site.xml:0+178
11/05/22 11:26:50 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
11/05/22 11:26:50 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:50 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:50 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:50 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:50 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:50 INFO mapred.Task: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting
11/05/22 11:26:53 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/core-site.xml:0+178
11/05/22 11:26:53 INFO mapred.Task: Task 'attempt_local_0001_m_000004_0' done.
11/05/22 11:26:53 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:53 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:53 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:53 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:53 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:53 INFO mapred.Task: Task:attempt_local_0001_m_000005_0 is done. And is in the process of commiting
11/05/22 11:26:56 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/mapred-site.xml:0+178
11/05/22 11:26:56 INFO mapred.Task: Task 'attempt_local_0001_m_000005_0' done.
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Merger: Merging 6 sorted segments
11/05/22 11:26:56 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
11/05/22 11:26:56 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449
11/05/22 11:26:59 INFO mapred.LocalJobRunner: reduce > reduce
11/05/22 11:26:59 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
11/05/22 11:27:00 INFO mapred.JobClient: map 100% reduce 100%
11/05/22 11:27:00 INFO mapred.JobClient: Job complete: job_local_0001
11/05/22 11:27:00 INFO mapred.JobClient: Counters: 17
11/05/22 11:27:00 INFO mapred.JobClient: File Input Format Counters
11/05/22 11:27:00 INFO mapred.JobClient: Bytes Read=14668
11/05/22 11:27:00 INFO mapred.JobClient: File Output Format Counters
11/05/22 11:27:00 INFO mapred.JobClient: Bytes Written=123
11/05/22 11:27:00 INFO mapred.JobClient: FileSystemCounters
11/05/22 11:27:00 INFO mapred.JobClient: FILE_BYTES_READ=1108835
11/05/22 11:27:00 INFO mapred.JobClient: FILE_BYTES_WRITTEN=1232836
11/05/22 11:27:00 INFO mapred.JobClient: Map-Reduce Framework
11/05/22 11:27:00 INFO mapred.JobClient: Map output materialized bytes=55
11/05/22 11:27:00 INFO mapred.JobClient: Map input records=357
11/05/22 11:27:00 INFO mapred.JobClient: Reduce shuffle bytes=0
11/05/22 11:27:00 INFO mapred.JobClient: Spilled Records=2
11/05/22 11:27:00 INFO mapred.JobClient: Map output bytes=17
11/05/22 11:27:00 INFO mapred.JobClient: Map input bytes=14668
11/05/22 11:27:00 INFO mapred.JobClient: SPLIT_RAW_BYTES=713
11/05/22 11:27:00 INFO mapred.JobClient: Combine input records=1
11/05/22 11:27:00 INFO mapred.JobClient: Reduce input records=1
11/05/22 11:27:00 INFO mapred.JobClient: Reduce input groups=1
11/05/22 11:27:00 INFO mapred.JobClient: Combine output records=1
11/05/22 11:27:00 INFO mapred.JobClient: Reduce output records=1
11/05/22 11:27:00 INFO mapred.JobClient: Map output records=1
11/05/22 11:27:00 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/05/22 11:27:00 INFO mapred.FileInputFormat: Total input paths to process : 1
11/05/22 11:27:00 INFO mapred.JobClient: Running job: job_local_0002
11/05/22 11:27:00 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:27:00 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:27:00 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:27:00 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:27:00 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:27:00 INFO mapred.MapTask: Finished spill 0
11/05/22 11:27:00 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
11/05/22 11:27:01 INFO mapred.JobClient: map 0% reduce 0%
11/05/22 11:27:03 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449/part-00000:0+111
11/05/22 11:27:03 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449/part-00000:0+111
11/05/22 11:27:03 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Merger: Merging 1 sorted segments
11/05/22 11:27:03 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
11/05/22 11:27:03 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/home/lidongbo/soft/hadoop-0.20.203.0/output
11/05/22 11:27:04 INFO mapred.JobClient: map 100% reduce 0%
11/05/22 11:27:06 INFO mapred.LocalJobRunner: reduce > reduce
11/05/22 11:27:06 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
11/05/22 11:27:07 INFO mapred.JobClient: map 100% reduce 100%
11/05/22 11:27:07 INFO mapred.JobClient: Job complete: job_local_0002
11/05/22 11:27:07 INFO mapred.JobClient: Counters: 17
11/05/22 11:27:07 INFO mapred.JobClient: File Input Format Counters
11/05/22 11:27:07 INFO mapred.JobClient: Bytes Read=123
11/05/22 11:27:07 INFO mapred.JobClient: File Output Format Counters
11/05/22 11:27:07 INFO mapred.JobClient: Bytes Written=23
11/05/22 11:27:07 INFO mapred.JobClient: FileSystemCounters
11/05/22 11:27:07 INFO mapred.JobClient: FILE_BYTES_READ=607997
11/05/22 11:27:07 INFO mapred.JobClient: FILE_BYTES_WRITTEN=701437
11/05/22 11:27:07 INFO mapred.JobClient: Map-Reduce Framework
11/05/22 11:27:07 INFO mapred.JobClient: Map output materialized bytes=25
11/05/22 11:27:07 INFO mapred.JobClient: Map input records=1
11/05/22 11:27:07 INFO mapred.JobClient: Reduce shuffle bytes=0
11/05/22 11:27:07 INFO mapred.JobClient: Spilled Records=2
11/05/22 11:27:07 INFO mapred.JobClient: Map output bytes=17
11/05/22 11:27:07 INFO mapred.JobClient: Map input bytes=25
11/05/22 11:27:07 INFO mapred.JobClient: SPLIT_RAW_BYTES=127
11/05/22 11:27:07 INFO mapred.JobClient: Combine input records=0
11/05/22 11:27:07 INFO mapred.JobClient: Reduce input records=1
11/05/22 11:27:07 INFO mapred.JobClient: Reduce input groups=1
11/05/22 11:27:07 INFO mapred.JobClient: Combine output records=0
11/05/22 11:27:07 INFO mapred.JobClient: Reduce output records=1
11/05/22 11:27:07 INFO mapred.JobClient: Map output records=1
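With job_local_0002 complete, the result lands in `output/`, and the counters report exactly one reduce output record. What the job's `dfs[a-z.]+` pattern extracts can be sanity-checked with plain `grep` on a sample line (line content assumed, typical of a stock hdfs config):

```shell
# The regex grabs "dfs" followed by lowercase letters and dots; angle
# brackets in the XML tags stop the match.
printf '<name>dfs.permissions</name>\n' | grep -Eo 'dfs[a-z.]+'
# -> dfs.permissions
```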
Ubuntu 11.04 does not install sshd by default.
Install it with: sudo apt-get install openssh-server
Then check whether the SSH server is running: ps -e | grep ssh. If only ssh-agent shows up, the server has not been started yet and needs /etc/init.d/ssh start; if sshd shows up, the server is already running.
The server configuration file is /etc/ssh/sshd_config, where the SSH service port is defined. The default port is 22; you can change it to another number, such as 222.
Then restart the SSH service: sudo /etc/init.d/ssh restart
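For reference, the port setting in `sshd_config` looks like this (a sketch with the default value; 222 as in the example above):

```
# excerpt from /etc/ssh/sshd_config
Port 22
# to move the service, change it, e.g.:
# Port 222
# then: sudo /etc/init.d/ssh restart
```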
root@tiger:/etc# apt-get install openssh-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  ssh-import-id
Suggested packages:
  rssh molly-guard openssh-blacklist openssh-blacklist-extra
The following NEW packages will be installed:
  openssh-server ssh-import-id
0 upgraded, 2 newly installed, 0 to remove and 109 not upgraded.
Need to get 317 kB of archives.
After this operation, 913 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://cn.archive.ubuntu.com/ubuntu/ natty/main openssh-server i386 1:5.8p1-1ubuntu3 [311 kB]
Get:2 http://cn.archive.ubuntu.com/ubuntu/ natty/main ssh-import-id all 2.4-0ubuntu1 [5,934 B]
Fetched 317 kB in 2s (144 kB/s)
Preconfiguring packages ...
Selecting previously deselected package openssh-server.
(Reading database ... 134010 files and directories currently installed.)
Unpacking openssh-server (from .../openssh-server_1%3a5.8p1-1ubuntu3_i386.deb) ...
Selecting previously deselected package ssh-import-id.
Unpacking ssh-import-id (from .../ssh-import-id_2.4-0ubuntu1_all.deb) ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Processing triggers for ufw ...
Processing triggers for man-db ...
Setting up openssh-server (1:5.8p1-1ubuntu3) ...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
ssh start/running, process 2396
Setting up ssh-import-id (2.4-0ubuntu1) ...
root@tiger:/etc# sshd
sshd re-exec requires execution with an absolute path
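The `sshd re-exec requires execution with an absolute path` message means sshd must be launched via its full path, not a bare command name. A sketch of resolving that path (fallback location assumed, typical for Ubuntu; here the service had in fact already been started by the package install):

```shell
# Resolve sshd's absolute path from $PATH, falling back to the usual
# Ubuntu location when the current shell does not have it on PATH:
SSHD_PATH=$(command -v sshd || echo /usr/sbin/sshd)
echo "$SSHD_PATH"
# starting it manually would then be:  "$SSHD_PATH"
# or, via the init script:            /etc/init.d/ssh start
```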
root@tiger:/etc/init.d# ssh 127.0.0.1
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
ECDSA key fingerprint is 72:0f:15:ff:d4:14:63:ab:6c:6e:5f:57:4b:5c:cf:dd.
Are you sure you want to continue connecting (yes/no)?
This resolved the `ssh: connect to host 127.0.0.1 port 22: Connection refused` problem.
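Whether the refusal is actually gone can be re-checked without a full login, using bash's `/dev/tcp` pseudo-device (loopback address and default port 22 assumed):

```shell
# Attempt a raw TCP connect to 127.0.0.1:22; success means something is
# listening there, so "Connection refused" will no longer occur.
if (exec 3<>/dev/tcp/127.0.0.1/22) 2>/dev/null; then
    echo "port 22 open"
else
    echo "port 22 closed: sshd not listening"
fi
```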