Hadoop Basics - 1

What Hadoop environment and version have you used?

Apache Hadoop (open source), version 2.8

Cloudera CDH, version 5

What are the three core components of Hadoop?
HDFS: Hadoop's distributed file system
MapReduce: the data-processing (compute) engine
YARN: the resource management and scheduling system
Which components of the Hadoop platform have you used or do you know about?
Offline (batch) side: Sqoop, YARN, HDFS, MapReduce, Hive
Real-time side: Flume (log collection), Kafka (message queue), HBase (a column-oriented database),
Spark (an in-memory compute engine), Flink (a stream-processing compute engine)
In HDFS, how large is one data block?
128 MB by default in Hadoop 2.x and later (64 MB in 1.x); the size is controlled by dfs.blocksize.
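As a quick sanity check on the 128 MB figure, the number of blocks a file occupies follows from a ceiling division. The helper below is purely illustrative (it is not part of any Hadoop API):

```python
import math

BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size in Hadoop 2.x: 128 MB

def num_blocks(file_size_bytes: int) -> int:
    """Number of HDFS blocks a file of the given size occupies."""
    if file_size_bytes == 0:
        return 0
    return math.ceil(file_size_bytes / BLOCK_SIZE)

# A 300 MB file spans 3 blocks: 128 MB + 128 MB + 44 MB.
print(num_blocks(300 * 1024 * 1024))  # 3
```

Note that the last block only occupies as much disk space as it actually holds; a 1 KB file does not consume 128 MB.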
How many copies of each block are kept by default?
3 (controlled by dfs.replication).
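Both defaults can be overridden per cluster in hdfs-site.xml. A minimal sketch, with the values shown being the stock defaults:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value> <!-- replicas kept per block -->
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value> <!-- 128 MB, in bytes -->
  </property>
</configuration>
```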
Which components make up HDFS?
NameNode, DataNode, SecondaryNameNode
What are the functions and roles of these HDFS components?
SecondaryNameNode: periodically fetches the NameNode's fsimage and edit log and merges them into a new fsimage (checkpointing); note that it is not a hot standby for the NameNode
NameNode: stores the file-system metadata and handles all client requests
DataNode: stores the actual block data
Which basic Hadoop daemons are there?
NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager (jps is the JDK tool used to list these Java processes, not a daemon itself)
In HDFS, what are the steps and principles for writing data (uploading a file) and reading data (downloading a file)?
Reading data:
1. The client asks the NameNode for a file.
2. The NameNode responds with the list of DataNodes that hold the file's blocks.
3. Using that list, the client asks a DataNode to read the data.
4. The DataNode acknowledges the request.
5. The client requests block 1.
6. The DataNode returns the data of block 1.
7. The client keeps requesting the remaining blocks.
8. The DataNodes return the remaining data, which the client assembles into the file.
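The read path above can be sketched as a toy in-memory model: the "NameNode" maps a file to an ordered block list and the "DataNodes" hold block contents. All names and data here are invented for illustration and bear no relation to the real HDFS wire protocol:

```python
# Toy model of an HDFS read: metadata lookup, then block-by-block fetch.
namenode = {"/logs/app.log": ["blk_1", "blk_2", "blk_3"]}  # file -> ordered block ids
datanodes = {                                              # block id -> block data
    "blk_1": b"hello ",
    "blk_2": b"hdfs ",
    "blk_3": b"reader",
}

def read_file(path: str) -> bytes:
    """Ask the 'NameNode' for the block list, then fetch each block in order."""
    block_ids = namenode[path]                        # steps 1-2: metadata lookup
    return b"".join(datanodes[b] for b in block_ids)  # steps 3-8: fetch and assemble

print(read_file("/logs/app.log"))  # b'hello hdfs reader'
```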
Writing data:
1. The client asks the NameNode to write a file.
2. The NameNode checks that the request is valid (e.g. the path does not already exist and the client has permission).
3. The NameNode responds that the write is allowed.
4. The client requests to write the first block.
5. The NameNode returns a list of target DataNodes.
6. The client contacts the first DataNode in the list to write the data.
7. The DataNode accepts the write.
8. The client starts streaming the data.
8.1 The first DataNode forwards the data to the next DataNode for replication.
8.2 The other DataNodes accept the replication request.
8.3 Replication proceeds along this pipeline.
8.4 Replication completes and acknowledgements flow back.
9. The DataNode reports to the client that the write succeeded.
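The pipelined replication in steps 8.1-8.4 can likewise be sketched as a toy model in which each DataNode stores the block locally and forwards it to the next node in the pipeline. Again, this is purely illustrative, not the real DataNode protocol:

```python
class ToyDataNode:
    """Stores blocks locally and forwards each write to the next pipeline node."""
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[str, bytes] = {}
        self.next: "ToyDataNode | None" = None

    def write(self, block_id: str, data: bytes) -> None:
        self.blocks[block_id] = data   # store the local replica
        if self.next is not None:      # forward along the pipeline (steps 8.1-8.3)
            self.next.write(block_id, data)

# The NameNode picks three DataNodes; the client chains them into a pipeline.
dn1, dn2, dn3 = ToyDataNode("dn1"), ToyDataNode("dn2"), ToyDataNode("dn3")
dn1.next, dn2.next = dn2, dn3

dn1.write("blk_1", b"payload")        # the client streams the block to dn1 only
print(all(dn.blocks.get("blk_1") == b"payload" for dn in (dn1, dn2, dn3)))  # True
```

The key point the model captures: the client sends each block once, to the first DataNode, and replication to the other two replicas happens between DataNodes.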
Terminal session (locating the local Hadoop installation):

stu@stu-pc:~$ find / -name "hadoop" -type d 2>/dev/null | grep -i hadoop
/home/stu/hadoop
/usr/local/hadoop
/usr/local/hadoop/etc/hadoop
/usr/local/hadoop/share/hadoop
/usr/local/hadoop/share/doc/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-app/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-common/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-hdfs-httpfs/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-hdfs-httpfs/apidocs/org/apache/hadoop/lib/service/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-auth/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-api/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-registry/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-client/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-1/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/apidocs/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/api/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/api/org/apache/hadoop/lib/service/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-project-dist/hadoop-hdfs-client/build/source/hadoop-hdfs-project/hadoop-hdfs-client/target/api/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-project-dist/hadoop-common/build/source/hadoop-common-project/hadoop-common/target/api/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-project-dist/hadoop-hdfs-rbf/build/source/hadoop-hdfs-project/hadoop-hdfs-rbf/target/api/org/apache/hadoop
/usr/local/hadoop/share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/build/source/hadoop-hdfs-project/hadoop-hdfs/target/api/org/apache/hadoop
/usr/local/hive/lib/php/packages/serde/org/apache/hadoop
/usr/local/hive/apache-hive-3.1.2-bin/lib/php/packages/serde/org/apache/hadoop
/tmp/hadoop
/tmp/hadoop-unjar8342885896351558271/org/apache/hadoop
/tmp/hadoop-unjar6484726597599168525/org/apache/hadoop
stu@stu-pc:~$ ls /usr/local/hadoop
bin  data  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
stu@stu-pc:~$ ls /opt/hadoop
ls: cannot access '/opt/hadoop': No such file or directory
stu@stu-pc:~$