An Introduction to the Classic Hadoop Books

1. Hadoop: The Definitive Guide (Hadoop权威指南)

This is the most comprehensive of the three, widely treated as the bible of Hadoop, though it is a fairly heavy read.

Synopsis

Discover how Apache Hadoop can unleash the power of your data. This comprehensive resource shows you how to build and maintain reliable, scalable, distributed systems with the Hadoop framework -- an open source implementation of MapReduce, the algorithm on which Google built its empire. Programmers will find details for analyzing datasets of any size, and administrators will learn how to set up and run Hadoop clusters.

This revised edition covers recent changes to Hadoop, including new features such as Hive, Sqoop, and Avro. It also provides illuminating case studies that illustrate how Hadoop is used to solve specific problems. Looking to get the most out of your data? This is your book.

  • Use the Hadoop Distributed File System (HDFS) for storing large datasets, then run distributed computations over those datasets with MapReduce (a minimal HDFS client sketch follows this list)
  • Become familiar with Hadoop’s data and I/O building blocks for compression, data integrity, serialization, and persistence
  • Discover common pitfalls and advanced features for writing real-world MapReduce programs
  • Design, build, and administer a dedicated Hadoop cluster, or run Hadoop in the cloud
  • Use Pig, a high-level query language for large-scale data processing
  • Analyze datasets with Hive, Hadoop’s data warehousing system
  • Take advantage of HBase, Hadoop’s database for structured and semi-structured data
  • Learn ZooKeeper, a toolkit of coordination primitives for building distributed systems
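
The first bullet above is the basic HDFS-plus-MapReduce workflow. As a rough illustration of the storage half, here is a minimal sketch (not from the book) that writes and then reads back a small file through the org.apache.hadoop.fs.FileSystem API; the namenode address and the paths are placeholders, and in practice the cluster address would come from core-site.xml rather than being hard-coded.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRoundTrip {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder address: normally picked up from core-site.xml,
    // set here only to keep the example self-contained.
    conf.set("fs.defaultFS", "hdfs://namenode:9000");

    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/user/demo/hello.txt");   // illustrative path

    // Write a small file. HDFS splits large files into blocks and replicates them,
    // but from the client's point of view this looks like an ordinary file system.
    try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
      out.write("hello hadoop\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read the file back.
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      System.out.println(in.readLine());
    }

    fs.close();
  }
}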

"Now you have the opportunity to learn about Hadoop from a master -- not only of the technology, but also of common sense and plain talk."

2. Hadoop in Action

This is the recommended book for getting started.

Synopsis

"Hadoop in Action" teaches readers how to use Hadoop and write MapReduce programs. The intended readers are programmers, architects, and project managers who have to process large amounts of data offline. "Hadoop in Action" will lead the reader from obtaining a copy of Hadoop to setting it up in a cluster and writing data analytic programs. The book begins by making the basic idea of Hadoop and MapReduce easier to grasp by applying the default Hadoop installation to a few easy-to-follow tasks, such as analyzing changes in word frequency across a body of documents. The book continues through the basic concepts of MapReduce applications developed using Hadoop, including a close look at framework components, use of Hadoop for a variety of data analysis tasks, and numerous examples of Hadoop in action. "Hadoop in Action" will explain how to use Hadoop and present design patterns and practices of programming MapReduce. MapReduce is a complex idea both conceptually and in its implementation, and Hadoop users are challenged to learn all the knobs and levers for running Hadoop. This book takes you beyond the mechanics of running Hadoop, teaching you to write meaningful programs in a MapReduce framework. This book assumes the reader will have a basic familiarity with Java, as most code examples will be written in Java. Familiarity with basic statistical concepts (e.g. histogram, correlation) will help the reader appreciate the more advanced data processing examples.

3. Pro Hadoop

This book is said to be quite good.

Product description

You've heard the hype about Hadoop: it runs petabyte-scale data mining tasks insanely fast, it runs gigantic tasks on clouds for absurdly cheap, it's been heavily committed to by tech giants like IBM, Yahoo, and the Apache Project, and it's completely open source (thus free). But what exactly is it, and more importantly, how do you even get a Hadoop cluster up and running? From Apress, the name you've come to trust for hands-on technical knowledge, Pro Hadoop brings you up to speed on Hadoop. You learn the ins and outs of MapReduce; how to structure a cluster, design, and implement the Hadoop file system; and how to build your first cloud-computing tasks using Hadoop. Learn how to let Hadoop take care of distributing and parallelizing your software--you just focus on the code, Hadoop takes care of the rest. Best of all, you'll learn from a tech professional who's been in the Hadoop scene since day one. Written from the perspective of a principal engineer with down-in-the-trenches knowledge of what to do wrong with Hadoop, you learn how to avoid the common, expensive first errors that everyone makes with creating their own Hadoop system or inheriting someone else's. Skip the novice stage and the expensive, hard-to-fix mistakes...go straight to seasoned pro on the hottest cloud-computing framework with Pro Hadoop. Your productivity will blow your managers away.

What you'll learn

  • Set up a stand-alone Hadoop cluster the smart way, laid out simply and step by step so you can get up and running quickly to build your next data center, collaborative, data-intensive Internet services application, Software as a Service (SaaS), and more.
  • Optimize your Hadoop production tasks like an experienced pro.
  • Work with time-proven, bulletproof standard patterns that have been tested and debugged in high-volume production.
  • Understand just enough theoretical knowledge to know why something works in Hadoop, without getting bogged down in abstruse walls of theory.
  • Get detailed explanations of not only how to do something with Hadoop, but also why, from a front-line coder with years in the Hadoop game.
  • Turn someone else's expensive cluster-wide "wrong" into an orderly, productive "right" with professional-level debugging and testing.

Who is this book for?

IT professionals interested in investigating Hadoop and implementing it in their organizations, and existing Hadoop users who want to deepen their professional toolkits.

About the Apress Pro Series

The Apress Pro series books are practical, professional tutorials to keep you on and moving up the professional ladder. You have gotten the job, now you need to hone your skills in these tough competitive times. The Apress Pro series expands your skills and expertise in exactly the areas you need. Master the content of a Pro book, and you will always be able to get the job done in a professional development project. Written by experts in their field, Pro series books from Apress give you the hard-won solutions to problems you will face in your professional programming career.

Download link for all three books: http://dl.dbank.com/c0h9pycwed

Hadoop project homepage: http://hadoop.apache.org

Hadoop is a distributed-systems infrastructure developed by the Apache Foundation. It lets users write distributed programs without understanding the low-level details of distribution, and harness the power of a cluster for high-speed computation and storage.

Origins: Google's cluster systems

Google's data centers build clusters out of cheap Linux PCs and run all kinds of applications on top of them. Even newcomers to distributed development can quickly make use of Google's infrastructure. It has three core components:

1. GFS (Google File System). A distributed file system that hides details such as load balancing and redundant replication and presents a single, unified file-system API to the programs above it. Google optimized it for its own needs: access to very large files, reads that far outnumber writes, and commodity PCs that fail often enough to make node failures routine. GFS splits files into 64 MB chunks, spreads them across the machines in the cluster, and stores them on the local Linux file systems, keeping at least three replicas of every chunk. At the center is a single Master node that locates chunks through the file index. See the GFS paper published by Google's engineers for details.

2. MapReduce. Google found that most distributed computations can be expressed as MapReduce operations. Map breaks the input down into intermediate key/value pairs, and Reduce merges those key/value pairs into the final output. The programmer supplies these two functions; the underlying infrastructure distributes the Map and Reduce work across the cluster and stores the results in GFS.

3. BigTable. A large distributed database that is not a relational database. As the name suggests, it is one enormous table, used to store structured data.

Google has published papers on all three of these systems.

The open-source implementation

Hadoop is the umbrella name for the project; it takes its name from a well-fed toy elephant belonging to the author's son. It consists mainly of HDFS, MapReduce, and HBase:

  • HDFS is an open-source implementation of the Google File System (GFS).
  • MapReduce is an open-source implementation of Google's MapReduce.
  • HBase is an open-source implementation of Google's BigTable.

This distributed framework is highly inventive and enormously scalable, and it gives Google a strong competitive edge in system throughput. The Apache Foundation therefore implemented an open-source version in Java, which runs on Linux platforms such as Fedora and Ubuntu. Hadoop is currently backed by Yahoo, with Yahoo employees working on the project long term, and Yahoo is also preparing to replace its original distributed systems with Hadoop internally.

Hadoop implements the HDFS file system and MapReduce. To run a job, a user only has to extend MapReduceBase, provide two classes that implement Map and Reduce respectively, and register the Job; the framework then runs it across the cluster automatically (a sketch of this pattern appears at the end of this section).

The current release is 0.20.1. It is not yet mature, but cluster sizes have already reached 4,000 nodes, in a cluster built in Yahoo!'s labs. The figures for that cluster:

  • 4,000 nodes
  • 2 x quad-core Xeon @ 2.5 GHz per node
  • 4 x 1 TB SATA disks per node
  • 8 GB RAM per node
  • Gigabit network bandwidth per node
  • 40 nodes per rack
  • 4 gigabit-Ethernet uplinks per rack
  • Red Hat Linux AS4 (Nahant Update 5)
  • Sun Java JDK 1.6.0_05-b13
  • In total, the cluster has more than 30,000 CPUs and nearly 16 PB of disk space!

HDFS divides nodes into two kinds: the NameNode and DataNodes. There is a single NameNode; programs talk to it first and then read and write file data on the DataNodes. These operations are transparent and look no different from an ordinary file-system API.

MapReduce is coordinated by the JobTracker node, which assigns work and handles communication with the user program.

The HDFS and MapReduce implementations are completely separate; it is not the case that MapReduce cannot run without HDFS.

Hadoop shares a common trait and goal with other cloud-computing projects: computing over massive amounts of data. Computation at that scale needs a stable, reliable data container, and that is why the Hadoop Distributed File System (HDFS) exists.

HDFS's communication layer is built on org.apache.hadoop.ipc; you can quickly bring up a node with RPC.Server.start(), but the concrete business logic still has to be implemented yourself. For HDFS, that business logic is reading and writing data streams, NameNode/DataNode communication, and so on.

MapReduce lives mainly in org.apache.hadoop.mapred; implement the interface classes it provides and handle the node communication (which does not have to use Hadoop's communication interfaces), and you can run MapReduce computations.

At present, this project is still...
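
The MapReduceBase pattern described above belongs to the old org.apache.hadoop.mapred API of the 0.20 line. The following is a minimal, hedged sketch of that pattern, not code from the article: it builds a histogram of line lengths over the input files, with the input and output paths taken from the command line as placeholders.

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Old-API job: a histogram of line lengths across the input files.
public class LineLengthHistogram {

  // The Map class extends MapReduceBase and implements Mapper, as described above.
  // It emits (line length in bytes, 1) for every input line.
  public static class LengthMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, IntWritable, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final IntWritable length = new IntWritable();

    public void map(LongWritable offset, Text line,
                    OutputCollector<IntWritable, IntWritable> output, Reporter reporter)
        throws IOException {
      length.set(line.getLength());
      output.collect(length, ONE);   // intermediate key/value pair
    }
  }

  // The Reduce class also extends MapReduceBase; it sums the counts per length.
  public static class SumReducer extends MapReduceBase
      implements Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {
    public void reduce(IntWritable length, Iterator<IntWritable> counts,
                       OutputCollector<IntWritable, IntWritable> output, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (counts.hasNext()) {
        sum += counts.next().get();
      }
      output.collect(length, new IntWritable(sum));
    }
  }

  // "Registering the Job": describe it in a JobConf and submit it via JobClient.
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(LineLengthHistogram.class);
    conf.setJobName("line-length-histogram");
    conf.setMapperClass(LengthMapper.class);
    conf.setReducerClass(SumReducer.class);
    conf.setOutputKeyClass(IntWritable.class);
    conf.setOutputValueClass(IntWritable.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));   // input directory on HDFS
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));  // must not exist yet
    JobClient.runJob(conf);
  }
}

JobClient.runJob(conf) submits the JobConf to the JobTracker described earlier and blocks until the job completes; the framework distributes the map and reduce work across the cluster and reads the input from HDFS.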