A Deep Dive into Hive's Index Implementation

This article details the indexing support available in Hive since version 0.7, covering the index handler interface, the index creation process, and how index data is populated. A test case walks through creating and using an index in Hive to improve query efficiency.

Indexing is a standard database technique, and Hive supports indexes as of version 0.7. Rather than a 'one size fits all' implementation, Hive exposes a pluggable index interface and ships one concrete implementation as a reference. Hive's index interface is as follows:

public interface HiveIndexHandler extends Configurable {
  /**
   * Determines whether this handler implements indexes by creating an index
   * table.
   * 
   * @return true if index creation implies creation of an index table in Hive;
   *         false if the index representation is not stored in a Hive table
   */
  boolean usesIndexTable();

  /**
   * Requests that the handler validate an index definition and fill in
   * additional information about its stored representation.
   *
   * @throws HiveException if the index definition is invalid with respect to
   *        either the base table or the supplied index table definition
   */
  void analyzeIndexDefinition(
      org.apache.hadoop.hive.metastore.api.Table baseTable,
      org.apache.hadoop.hive.metastore.api.Index index,
      org.apache.hadoop.hive.metastore.api.Table indexTable)
      throws HiveException;

  /**
   * Requests that the handler generate a plan for building the index; the plan
   * should read the base table and write out the index representation.
   */
  List<Task<?>> generateIndexBuildTaskList(
      org.apache.hadoop.hive.ql.metadata.Table baseTbl,
      org.apache.hadoop.hive.metastore.api.Index index,
      List<Partition> indexTblPartitions, List<Partition> baseTblPartitions,
      org.apache.hadoop.hive.ql.metadata.Table indexTbl,
      Set<ReadEntity> inputs, Set<WriteEntity> outputs)
      throws HiveException;

}


When an index is created, Hive first calls the handler's usesIndexTable method to determine whether the index is stored as a Hive table (the default implementation stores it in Hive). It then calls analyzeIndexDefinition to validate the index creation statement; if the definition is valid, an entry for the index table is added to the metastore's IDXS table, otherwise an exception is thrown. If the index was created with deferred rebuild, then executing alter index xxx_index on xxx rebuild calls generateIndexBuildTaskList to obtain the MapReduce tasks that build the index, and runs them to populate the index with data.
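
As a quick orientation, the statements below sketch how the DDL maps onto the handler methods just described (t and t_index are hypothetical names used only for illustration):

-- CREATE INDEX triggers usesIndexTable() and analyzeIndexDefinition();
-- WITH DEFERRED REBUILD leaves the index empty until an explicit rebuild.
CREATE INDEX t_index ON TABLE t (id)
  AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
  WITH DEFERRED REBUILD;

-- ALTER INDEX ... REBUILD triggers generateIndexBuildTaskList(), which
-- produces and runs the MapReduce tasks that populate the index table.
ALTER INDEX t_index ON t REBUILD;

-- Related housekeeping: list or drop the indexes on a table.
SHOW INDEX ON t;
DROP INDEX t_index ON t;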

Below is an index test adapted from an example by another author (see the reference at the end):

First, generate the test data:

#! /bin/bash
# Generates roughly 350MB of raw data: one million tab-separated rows.
# Usage: redirect stdout into the file loaded below, e.g.
#   ./gen_data.sh > /home/hadoop/hive_index_test/dual.txt
i=0
while [ $i -ne 1000000 ]
do
        echo -e "$i\tA decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America."
        i=$(($i+1))
done


Create the test table:
hive> create table table01( id int, name string)  
    > ROW FORMAT DELIMITED  
    > FIELDS TERMINATED BY '\t';
OK
Time taken: 0.371 seconds
hive> load data local inpath '/home/hadoop/hive_index_test/dual.txt' overwrite into table table01;
Copying data from file:/home/hadoop/hive_index_test/dual.txt
Copying file: file:/home/hadoop/hive_index_test/dual.txt
Loading data to table default.table01
Deleted hdfs://localhost:9000/user/hive/warehouse/table01
OK
Time taken: 13.492 seconds
hive> create table table02 as select id,name as text from table01;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301221042_0006, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201301221042_0006
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201301221042_0006
2013-01-22 11:21:19,639 Stage-1 map = 0%,  reduce = 0%
2013-01-22 11:21:25,678 Stage-1 map = 33%,  reduce = 0%
2013-01-22 11:21:37,754 Stage-1 map = 67%,  reduce = 0%
2013-01-22 11:21:43,788 Stage-1 map = 100%,  reduce = 0%
2013-01-22 11:21:46,828 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201301221042_0006
Ended Job = -663277165, job is filtered out (removed at runtime).
Moving data to: hdfs://localhost:9000/tmp/hive-hadoop/hive_2013-01-22_11-21-13_661_2061036951988537032/-ext-10001
Moving data to: hdfs://localhost:9000/user/hive/warehouse/table02
1000000 Rows loaded to hdfs://localhost:9000/tmp/hive-hadoop/hive_2013-01-22_11-21-13_661_2061036951988537032/-ext-10000
OK
Time taken: 33.904 seconds
hive> dfs -ls /user/hive/warehouse/table02;
Found 6 items
-rw-r--r--   3 hadoop supergroup   67109134 2013-01-22 11:21 /user/hive/warehouse/table02/000000_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-01-22 11:21 /user/hive/warehouse/table02/000001_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-01-22 11:21 /user/hive/warehouse/table02/000002_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-01-22 11:21 /user/hive/warehouse/table02/000003_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-01-22 11:21 /user/hive/warehouse/table02/000004_0
-rw-r--r--   3 hadoop supergroup   21344316 2013-01-22 11:21 /user/hive/warehouse/table02/000005_0
hive> select * from table02 where id=500000;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301221042_0007, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201301221042_0007
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201301221042_0007
2013-01-22 11:22:26,865 Stage-1 map = 0%,  reduce = 0%
2013-01-22 11:22:28,884 Stage-1 map = 33%,  reduce = 0%
2013-01-22 11:22:31,905 Stage-1 map = 67%,  reduce = 0%
2013-01-22 11:22:34,921 Stage-1 map = 100%,  reduce = 0%
2013-01-22 11:22:37,943 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201301221042_0007
OK
500000    A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 18.551 seconds

Create the index:
hive> create index table02_index on table table02(id)  
    >     as 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'  
    >     with deferred rebuild;
OK
Time taken: 0.503 seconds
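
At this point the index exists only as metastore metadata (the IDXS entry mentioned earlier); because of WITH DEFERRED REBUILD, the index table itself is still empty. One way to confirm the registration (a sketch; output omitted):

-- List the indexes registered on table02; the new entry should appear
-- even before the index has been rebuilt.
SHOW FORMATTED INDEX ON table02;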

Populate the index with data:
hive> alter index table02_index on table02 rebuild;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201301221042_0008, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201301221042_0008
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201301221042_0008
2013-01-22 11:23:56,870 Stage-1 map = 0%,  reduce = 0%
2013-01-22 11:24:02,902 Stage-1 map = 33%,  reduce = 0%
2013-01-22 11:24:08,929 Stage-1 map = 67%,  reduce = 0%
2013-01-22 11:24:11,944 Stage-1 map = 67%,  reduce = 11%
2013-01-22 11:24:14,966 Stage-1 map = 100%,  reduce = 11%
2013-01-22 11:24:21,007 Stage-1 map = 100%,  reduce = 22%
2013-01-22 11:24:27,043 Stage-1 map = 100%,  reduce = 67%
2013-01-22 11:24:30,056 Stage-1 map = 100%,  reduce = 86%
2013-01-22 11:24:33,089 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201301221042_0008
Loading data to table default.default__table02_table02_index__
Deleted hdfs://localhost:9000/user/hive/warehouse/default__table02_table02_index__
Table default.default__table02_table02_index__ stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 74701985]
OK
Time taken: 61.203 seconds
hive> dfs -ls /user/hive/warehouse/default*;
Found 1 items
-rw-r--r--   3 hadoop supergroup   74701985 2013-01-22 11:24 /user/hive/warehouse/default__table02_table02_index__/000000_0

We can inspect the data stored in the index:
hive> select * from default__table02_table02_index__ limit 3;
OK
0    hdfs://localhost:9000/user/hive/warehouse/table02/000000_0    [0]
1    hdfs://localhost:9000/user/hive/warehouse/table02/000000_0    [352]
2    hdfs://localhost:9000/user/hive/warehouse/table02/000000_0    [704]
Time taken: 0.156 seconds
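
Each row of the index table maps an indexed value to the HDFS file that contains it and the byte offsets of the matching rows within that file. The schema can be confirmed with DESCRIBE (a sketch; output omitted):

-- Expected columns: id (the indexed column), _bucketname (HDFS file path),
-- _offsets (array of byte offsets into that file).
DESCRIBE default__table02_table02_index__;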

Now let's test using an index file by hand:
hive> SET hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
hive> Insert overwrite directory "/tmp/table02_index_data" select `_bucketname`, `_offsets` from   default__table02_table02_index__ where id =500000;  
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301221042_0009, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201301221042_0009
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201301221042_0009
2013-01-22 11:30:23,859 Stage-1 map = 0%,  reduce = 0%
2013-01-22 11:30:26,872 Stage-1 map = 100%,  reduce = 0%
2013-01-22 11:30:29,904 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201301221042_0009
Ended Job = -489547412, job is filtered out (removed at runtime).
Launching Job 2 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301221042_0010, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201301221042_0010
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201301221042_0010
2013-01-22 11:30:35,861 Stage-2 map = 0%,  reduce = 0%
2013-01-22 11:30:38,882 Stage-2 map = 100%,  reduce = 0%
2013-01-22 11:30:41,907 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_201301221042_0010
Moving data to: /tmp/table02_index_data
1 Rows loaded to /tmp/table02_index_data
OK
Time taken: 25.173 seconds
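
The export above writes a single index entry, mapping id=500000 to its bucket file and offset. Assuming Hive's default output file name (an assumption; the actual name may differ), the exported entry can be inspected directly from the CLI:

hive> dfs -cat /tmp/table02_index_data/000000_0;
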
hive> select * from table02 where id =500000;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301221042_0011, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201301221042_0011
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201301221042_0011
2013-01-22 11:31:06,055 Stage-1 map = 0%,  reduce = 0%
2013-01-22 11:31:09,066 Stage-1 map = 33%,  reduce = 0%
2013-01-22 11:31:12,083 Stage-1 map = 67%,  reduce = 0%
2013-01-22 11:31:15,102 Stage-1 map = 100%,  reduce = 0%
2013-01-22 11:31:18,127 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201301221042_0011
OK
500000    A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 17.533 seconds
hive> Set hive.index.compact.file=/tmp/table02_index_data;
hive> Set hive.optimize.index.filter=false;
hive> Set hive.input.format=org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexInputFormat;  
hive> select * from table02 where id =500000;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301221042_0012, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201301221042_0012
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201301221042_0012
2013-01-22 11:32:14,929 Stage-1 map = 0%,  reduce = 0%
2013-01-22 11:32:17,942 Stage-1 map = 100%,  reduce = 0%
2013-01-22 11:32:20,968 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201301221042_0012
OK
500000    A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 11.222 seconds
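
Note that the three SET commands above wire the index file and input format up by hand. Later Hive releases can apply compact indexes automatically during query planning; a minimal sketch, assuming Hive 0.8 or newer:

-- Let the optimizer rewrite qualifying queries to use compact indexes
-- automatically (assumption: available from Hive 0.8 onwards).
SET hive.optimize.index.filter=true;
SELECT * FROM table02 WHERE id = 500000;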

To summarize: an index table basically contains three columns: 1. the indexed column(s) from the source table; 2. _bucketname, the path of the HDFS file that holds the row; 3. _offsets, the byte offset(s) of the row within that file. (Each row of the test data happens to be 352 bytes, which is why consecutive ids above map to offsets 0, 352, 704.) The principle: by recording each indexed value's offset within HDFS, Hive can read the matching data directly and avoid a full table scan; in this test the query time dropped from roughly 18 seconds to about 11 seconds.

Reference: http://blog.youkuaiyun.com/liwei_1988/article/details/7319030

Reposted from: https://www.cnblogs.com/end/archive/2013/01/22/2871147.html
