Implementing Hive Proxy (4): Solving the Scratch Directory Permission Problem

This post is about the permission problem with Hive's intermediate job files in HDFS. The default path of these intermediate files depends on the currently logged-in user, so implementing a proxy feature runs into permission errors. The post shows how the proxy feature is implemented and how the permission problem is resolved: by changing the relevant methods and configuration, Hive can create its HDFS temporary directories with 777 permissions.

 


Hive's intermediate job files in HDFS are created under a per-user path, which defaults to /tmp/hive-${user.name}. This causes permission problems when implementing a proxy feature. For example, after enabling the proxy, when the superuser hdfs proxies to an ordinary user user, the temporary files in HDFS land in /tmp/hive-user, but that directory is owned by hdfs. When user later runs a job of its own, it hits a permission error on that directory. Below is how the proxy is implemented and how the permission problem is solved.
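For context: the standard impersonation mechanism in the Hadoop stack is UserGroupInformation.createProxyUser, and the custom Hive proxy below is a simpler, config-driven variant of the same idea. A minimal sketch of the standard mechanism (the user names and the /tmp path are placeholders, and the real user must be whitelisted via hadoop.proxyuser.*.hosts/groups in core-site.xml):

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The real (super) user, e.g. hdfs, impersonates an ordinary user.
    UserGroupInformation realUser = UserGroupInformation.getLoginUser();
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser("user", realUser);
    // Everything inside doAs runs as the proxied user, so files it creates
    // in HDFS are owned by "user", not by the superuser.
    proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      FileSystem fs = FileSystem.get(conf);
      System.out.println(fs.getFileStatus(new Path("/tmp")).getOwner());
      return null;
    });
  }
}

The custom proxy in this post does not use doAs; it only redirects the scratch paths to the proxy user, which is exactly why the directory-ownership problem described above arises.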
1. Implementing the proxy feature
Modify the constructor of the org.apache.hadoop.hive.ql.Context class:


public Context(Configuration conf, String executionId) {
  this.conf = conf;
  this.executionId = executionId;
  if (HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_USE_CUSTOM_PROXY)) {
    String proxyUser = HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_CUSTOM_PROXY_USER);
    LOG.warn("use custom proxy,gen Scratch path,proxy user is " + proxyUser);
    // Fall back to the configured scratch dirs when no proxy user is set,
    // or when the proxy user is the superuser hdfs itself.
    if (proxyUser == null || ("").equals(proxyUser) || ("hdfs").equals(proxyUser)) {
      nonLocalScratchPath = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.SCRATCHDIR), executionId);
      localScratchDir = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.LOCALSCRATCHDIR), executionId).toUri().getPath();
    } else {
      // Derive both scratch paths from the proxy user instead of the login user.
      localScratchDir = new Path(System.getProperty("java.io.tmpdir") + File.separator + proxyUser, executionId).toUri().getPath();
      nonLocalScratchPath = new Path("/tmp/hive-" + proxyUser, executionId);
    }
  } else {
    nonLocalScratchPath = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.SCRATCHDIR), executionId);
    localScratchDir = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.LOCALSCRATCHDIR), executionId).toUri().getPath();
  }
  LOG.warn("in Context init function nonLocalScratchPath is " + nonLocalScratchPath);
  LOG.warn("in Context init function localScratchPath is " + localScratchDir);
  scratchDirPermission = HiveConf.getVar(conf, HiveConf.ConfVars.SCRATCHDIRPERMISSION);
}
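Note that HIVE_USE_CUSTOM_PROXY and HIVE_CUSTOM_PROXY_USER are not stock Hive ConfVars; they have to be added to the HiveConf.ConfVars enum as part of this patch. A sketch of what those entries could look like (the property names and defaults here are assumptions; only the constant names are fixed by the code above):

// Hypothetical additions to org.apache.hadoop.hive.conf.HiveConf.ConfVars;
// the enum already has constructors for boolean and String defaults.
HIVE_USE_CUSTOM_PROXY("hive.use.custom.proxy", false),
HIVE_CUSTOM_PROXY_USER("hive.custom.proxy.user", ""),

With these in place, the proxy can be switched on per session, e.g. set hive.use.custom.proxy=true; set hive.custom.proxy.user=user;.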

2. Solving the permission problem
In the code above you can see scratchDirPermission being read. This is the permission of the created scratch directory and defaults to 700. Since these are only intermediate files, we can open the permission up, e.g. to 777. However, after setting it to 777, the created directory turns out to have permission 755.
The error stack shows how the methods are called inside Context:

getExternalTmpPath -> getExternalScratchDir -> getScratchDir -> Utilities.createDirsWithPermission
(when the directory does not exist, the HDFS tmp directory is created with the permission configured by HiveConf.ConfVars.SCRATCHDIRPERMISSION)
Let's look at the getScratchDir method:


private final Map<String, Path> fsScratchDirs = new HashMap<String, Path>();
.....
  private Path getScratchDir(String scheme, String authority,
                             boolean mkdir, String scratchDir) { // mkdir is false for EXPLAIN statements
    String fileSystem = scheme + ":" + authority;
    Path dir = fsScratchDirs.get(fileSystem + "-" + TaskRunner.getTaskRunnerID());
    if (dir == null) {
      Path dirPath = new Path(scheme, authority,
          scratchDir + "-" + TaskRunner.getTaskRunnerID());
      if (mkdir) {
        try {
          FileSystem fs = dirPath.getFileSystem(conf);
          dirPath = new Path(fs.makeQualified(dirPath).toString());
          // The directory permission comes from HiveConf.ConfVars.SCRATCHDIRPERMISSION.
          FsPermission fsPermission = new FsPermission(Short.parseShort(scratchDirPermission.trim(), 8));
          if (!Utilities.createDirsWithPermission(conf, dirPath, fsPermission)) {
            throw new RuntimeException("Cannot make directory: "
                                       + dirPath.toString());
          }
          if (isHDFSCleanup) {
            fs.deleteOnExit(dirPath);
          }
        } catch (IOException e) {
          throw new RuntimeException(e);
        }
      }
      dir = dirPath;
      fsScratchDirs.put(fileSystem + "-" + TaskRunner.getTaskRunnerID(), dir);
    }
    return dir;
  }
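One detail worth noting in getScratchDir: the configured permission string is parsed with radix 8, so "700" and "777" are read as octal modes. A one-line check (the literal "777" stands in for the HiveConf.ConfVars.SCRATCHDIRPERMISSION setting):

// Short.parseShort("777", 8) == 0777 == 511 decimal; the radix-8 parse is
// what turns the permission string into a filesystem mode.
FsPermission p = new FsPermission(Short.parseShort("777".trim(), 8));
System.out.println(p); // prints rwxrwxrwx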

When Utilities.createDirsWithPermission is called, the directory permission passed in (set by HiveConf.ConfVars.SCRATCHDIRPERMISSION) defaults to 700.
The createDirsWithPermission method of the org.apache.hadoop.hive.ql.exec.Utilities class looks like this:


public static boolean createDirsWithPermission(Configuration conf, Path mkdir,
    FsPermission fsPermission) throws IOException {
  boolean recursive = false;
  if (SessionState.get() != null) {
    // recursive is true only for HiveServer2 queries with doAs enabled; in
    // that case the dirs are created with permission 777 under umask 000.
    recursive = SessionState.get().isHiveServerQuery() &&
        conf.getBoolean(HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS.varname,
            HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS.defaultBoolVal);
    fsPermission = new FsPermission((short)00777);
  }
  // if we made it so far without exception we are good!
  return createDirsWithPermission(conf, mkdir, fsPermission, recursive); // recursive defaults to false
}
.....
public static boolean createDirsWithPermission(Configuration conf, Path mkdirPath,
    FsPermission fsPermission, boolean recursive) throws IOException {
  String origUmask = null;
  LOG.warn("Create dirs " + mkdirPath + " with permission " + fsPermission + " recursive " +
      recursive);
  if (recursive) {
    // When recursive is true, save the original umask and force it to 000;
    // otherwise origUmask stays null and the umask is left untouched.
    origUmask = conf.get("fs.permissions.umask-mode");
    // this umask is required because by default the hdfs mask is 022 resulting in
    // all parents getting the fsPermission & !(022) permission instead of fsPermission
    conf.set("fs.permissions.umask-mode", "000");
  }
  FileSystem fs = ShimLoader.getHadoopShims().getNonCachedFileSystem(mkdirPath.toUri(), conf);
  LOG.warn("fs.permissions.umask-mode is " + conf.get("fs.permissions.umask-mode")); // 022 by default
  boolean retval = false;
  try {
    retval = fs.mkdirs(mkdirPath, fsPermission);
    // Because recursive is false here, fs.permissions.umask-mode is never
    // overridden and stays at 022, so even with fsPermission set to 777 the
    // created directory ends up with 777 & ~022 = 755.
    resetConfAndCloseFS(conf, recursive, origUmask, fs);
  } catch (IOException ioe) {
    try {
      resetConfAndCloseFS(conf, recursive, origUmask, fs);
    } catch (IOException e) {
      // do nothing - double failure
    }
  }
  return retval;
}

The HDFS configuration for fs.permissions.umask-mode defaults to 022:


public static final String  FS_PERMISSIONS_UMASK_KEY = "fs.permissions.umask-mode";

public static final int     FS_PERMISSIONS_UMASK_DEFAULT = 0022;
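This default umask is exactly why 777 turns into 755: mkdirs applies the mask to the requested mode, i.e. 777 & ~022 = 755. A quick sketch using Hadoop's FsPermission.applyUMask to confirm the arithmetic:

import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskDemo {
  public static void main(String[] args) {
    FsPermission requested = new FsPermission((short) 0777);
    FsPermission umask = new FsPermission((short) 0022);
    // applyUMask computes requested & ~umask, the mode mkdirs actually applies.
    System.out.println(requested.applyUMask(umask)); // prints rwxr-xr-x (755)
  }
}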

To make it possible to create temporary directories with 777 permission, change createDirsWithPermission as follows:


public static boolean createDirsWithPermission(Configuration conf, Path mkdir,
    FsPermission fsPermission) throws IOException {
  boolean recursive = false;
  if (SessionState.get() != null) {
    // Also force recursive (and thus umask 000) when the custom proxy is on.
    recursive = (SessionState.get().isHiveServerQuery() &&
        conf.getBoolean(HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS.varname,
            HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS.defaultBoolVal))
        || HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_USE_CUSTOM_PROXY);
    fsPermission = new FsPermission((short)00777);
  }
  // if we made it so far without exception we are good!
  return createDirsWithPermission(conf, mkdir, fsPermission, recursive);
}

With this change, Hive can now create 777 scratch directories in HDFS.
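A minimal way to verify the fix from the client side (a sketch: /tmp/hive-user is the proxy user's scratch root from the Context change above, and cluster connection details are assumed to come from the default Configuration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckScratchPerms {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path scratch = new Path("/tmp/hive-user");
    // Expect rwxrwxrwx after the patch; before it this printed rwxr-xr-x.
    System.out.println(fs.getFileStatus(scratch).getPermission());
  }
}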
