The Hive version used here is hive-2.1.1.
Where are the logs stored?
Hive has two kinds of logs:
1. System logs, which record Hive's runtime status and errors.
2. Job logs, which record the execution history of Hive jobs.
Where are the system logs stored?
The storage settings for Hive's logs are recorded in hive/conf/hive-log4j.properties (in Hive 2.x the template ships as hive-log4j2.properties, expressed in Log4j2 syntax). The defaults are:
hive.root.logger=WARN,DRFA
hive.log.dir=/tmp/${user.name}   # default log directory
hive.log.file=hive.log           # default file name
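These settings can also be overridden per session when starting the CLI, which is handy for debugging. A minimal sketch (the --hiveconf flag is standard; the values here are just examples):
# print INFO-level log output to the console for this session only
hive --hiveconf hive.root.logger=INFO,console
# or write the log file to a custom directory (example path)
hive --hiveconf hive.log.dir=/var/log/hive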
And where are the job logs stored? The answer is in HiveConf.java:
// Location of Hive run time structured log file
HIVEHISTORYFILELOC("hive.querylog.location", "/tmp/" + System.getProperty("user.name")),
So by default they are stored under /tmp/${user.name}.
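If you want the job logs somewhere else, this property can be overridden in hive-site.xml. A minimal sketch using the property name shown above (the path is just an example):
<property>
  <name>hive.querylog.location</name>
  <value>/var/log/hive/querylog</value>
</property>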
Note: the JDBC connection URL now uses the hive2 scheme; from this version on, Hive's server is HiveServer2.
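Before writing any Java, you can check that HiveServer2 is reachable with Beeline, the command-line client that ships with Hive. A minimal sketch (host, port, and user match the JDBC example below):
beeline -u jdbc:hive2://192.168.56.100:10000/default -n root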
The following error came up: User: hadoop is not allowed to impersonate anonymous
Solution: add the properties below to Hadoop's core-site.xml. Note that the user name in the middle of each key must be the user HiveServer2 runs as; the error above names hadoop, so the keys should be hadoop.proxyuser.hadoop.* (use root instead only if HiveServer2 actually runs as root):
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
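Since core-site.xml belongs to Hadoop, the change only takes effect once the daemons pick it up. Restarting Hadoop works; alternatively, as I understand from the Superusers page linked at the end of this post, the proxy-user settings can be reloaded without a restart:
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration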
JDBCUtils.java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JDBCUtils {
    private static String driver = "org.apache.hive.jdbc.HiveDriver";
    private static String url = "jdbc:hive2://192.168.56.100:10000/default";

    // register the HiveServer2 JDBC driver once, when the class is loaded
    static {
        try {
            Class.forName(driver);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    // open a connection to HiveServer2
    public static Connection getConnection() {
        try {
            return DriverManager.getConnection(url, "root", "root");
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return null;
    }

    // close the result set, statement, and connection, in that order
    public static void release(Connection conn, Statement st, ResultSet rs) {
        if (rs != null) {
            try {
                rs.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        if (st != null) {
            try {
                st.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
HiveJDBCDemo.java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJDBCDemo {
    public static void main(String[] args) {
        Connection conn = null;
        Statement st = null;
        ResultSet rs = null;
        String sql = "select * from word_counts";
        try {
            // get a connection
            conn = JDBCUtils.getConnection();
            System.out.println(conn);
            // create a statement to run queries with
            st = conn.createStatement();
            // execute the HQL query
            rs = st.executeQuery(sql);
            // print each (word, count) row
            while (rs.next()) {
                String word = rs.getString(1);
                int count = rs.getInt(2);
                System.out.println(word + "\t" + count);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            JDBCUtils.release(conn, st, rs);
        }
    }
}
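Since Java 7 the same demo can be written with try-with-resources, which closes the connection, statement, and result set automatically (in reverse order) and makes the release() helper above unnecessary. A minimal sketch against the same server and table, with a hypothetical class name:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJDBCDemoTwr {
    public static void main(String[] args) throws Exception {
        // register the HiveServer2 driver
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://192.168.56.100:10000/default";
        // each resource is closed automatically when the block exits
        try (Connection conn = DriverManager.getConnection(url, "root", "root");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select * from word_counts")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getInt(2));
            }
        }
    }
}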
Below is the detailed explanation from the Hadoop official documentation:
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/Superusers.html
Configurations
You can configure proxy user using properties hadoop.proxyuser.$superuser.hosts along with either or both of hadoop.proxyuser.$superuser.groups and hadoop.proxyuser.$superuser.users.
By specifying as below in core-site.xml, the superuser named super can connect only from host1 and host2 to impersonate a user belonging to group1 and group2.
<property>
  <name>hadoop.proxyuser.super.hosts</name>
  <value>host1,host2</value>
</property>
<property>
  <name>hadoop.proxyuser.super.groups</name>
  <value>group1,group2</value>
</property>
If these configurations are not present, impersonation will not be allowed and connection will fail.
If more lax security is preferred, the wildcard value * may be used to allow impersonation from any host or of any user. For example, by specifying as below in core-site.xml, user named oozie accessing from any host can impersonate any user belonging to any group.
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>
The hadoop.proxyuser.$superuser.hosts accepts list of ip addresses, ip address ranges in CIDR format and/or host names. For example, by specifying as below, user named super accessing from hosts in the range 10.222.0.0-15 and 10.113.221.221 can impersonate user1 and user2.
<property>
  <name>hadoop.proxyuser.super.hosts</name>
  <value>10.222.0.0/16,10.113.221.221</value>
</property>
<property>
  <name>hadoop.proxyuser.super.users</name>
  <value>user1,user2</value>
</property>