Error: Name node is in safe mode
This happens because the distributed file system enters safe mode while it starts up. While HDFS is in safe mode, the contents of the file system may not be modified or deleted, until safe mode ends. Safe mode exists mainly so that, at startup, the system can check the validity of the data blocks on each DataNode and, as policy dictates, replicate or delete some of them as needed. Safe mode can also be entered at runtime via a command. In practice, modifying or deleting files while the system is starting up will trigger this "safe mode, modification not allowed" error; usually you only need to wait a moment.
You can leave safe mode manually with the following command:
hdfs dfsadmin -safemode leave
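To check the current safe-mode state before (or after) forcing an exit:
hdfs dfsadmin -safemode get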
Symptoms
2017-12-14 18:32:32,336 ERROR [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1513247318192_0014_r_000008_0
2017-12-14 19:20:22,161 ERROR [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Container complete event for unknown container id container_1513249512179_0014_01_000064
Solution
Adjust the map/reduce container memory and heap sizes.
The failure can occur whether the values are set too high or too low; tune them to the machine's actual configuration.
Make sure the memory granted to map and reduce tasks never exceeds what the node has available. Both properties below take values in MB, and yarn.scheduler.maximum-allocation-mb (the largest single container) must not exceed yarn.nodemanager.resource.memory-mb (the total per node):
yarn.nodemanager.resource.memory-mb=4096
yarn.scheduler.maximum-allocation-mb=2048
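For the task-level settings, a minimal sketch in the same key=value style (the values are illustrative only; the -Xmx heap is conventionally set to roughly 80% of the container size):
mapreduce.map.memory.mb=1024
mapreduce.reduce.memory.mb=2048
mapreduce.map.java.opts=-Xmx820m
mapreduce.reduce.java.opts=-Xmx1640m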
Assorted errors: install ntp (many cluster errors are caused by unsynchronized clocks)
yum install ntp
Configure ntp (/etc/ntp.conf):
restrict default ignore                                    # by default, refuse modify/query requests and drop special packets
restrict 127.0.0.1                                         # grant the local host full access
restrict 192.168.56.0 mask 255.255.255.0 notrap nomodify   # allow machines on the LAN to synchronize time
server 192.168.56.121                                      # the LAN time server
driftfile /var/lib/ntp/drift
server 127.127.1.0                                         # local clock fallback (needed for the fudge line below)
fudge 127.127.1.0 stratum 10                               # mark the local clock as low priority (stratum 10)
Start ntpd:
service ntpd start
Enable start on boot:
chkconfig ntpd on
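Once ntpd is running, synchronization can be verified; a quick sketch (the server IP comes from the config above):
ntpq -p                      # list peers and their sync status
ntpdate -u 192.168.56.121    # optional one-shot sync against the LAN time server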
1. First, leave HDFS NameNode safe mode:
hadoop dfsadmin -safemode leave
2. Repair the missing HDFS blocks:
hadoop fsck /          # report the health of the file system
hdfs fsck / -delete    # this deletes every corrupted file (the ones with missing blocks)
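Before deleting anything, it is safer to list the affected files first:
hdfs fsck / -list-corruptfileblocks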
Blocks
In HDFS, files are stored as blocks, and since HDFS was designed for handling large files, the block abstraction fits that goal exactly. Concretely, a file too large to fit on a single node is split by HDFS into logical blocks that are stored across the machines of the cluster, which is what makes storing very large files possible. Using the abstract block as the unit of operation also simplifies storage management: file blocks are kept on the DataNodes, while the files' metadata is managed on the NameNode. Finally, replicating each block provides the fault-tolerance mechanism.
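To see this block layout for a concrete file, fsck can print it (the path below is a hypothetical example):
hdfs fsck /user/data/big.log -files -blocks -locations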
1. Set the replication factor
In the CDH UI (Cloudera Manager), select HDFS, then open its configuration.
dfs.replication=<number of DataNodes> (the replication factor must not exceed the number of DataNodes; 3 is the common default)
2. Update the replication factor of existing files (dfs.replication only applies to newly created files):
hadoop fs -setrep -R <replication factor from step 1> /
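For example, with three DataNodes you might run the following (the file path is a hypothetical example):
hadoop fs -setrep -R 3 /
hdfs dfs -stat %r /user/data/big.log    # print the file's current replication factor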
Error:
[root@hs01 hive]# hive
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
2017-12-20 14:31:54,793 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-common-1.1.0-cdh5.7.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:540)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1493)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2935)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2954)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513)
... 8 more
Solution
Follow the steps below to check whether the Hive Metastore service is running:
- Find the node hosting the Hive Metastore service; you can use Ambari (or Cloudera Manager on a CDH cluster) to look up that node's IP/hostname.
- SSH into that Hive Metastore node as root and execute the commands below.
bash# ps -aef|grep -i org.apache.hadoop.hive.metastore.HiveMetaStore
bash# lsof -i:9083
- From some other node, try connecting to the Hive Metastore port using telnet:
bash# telnet <hivemeta server> 9083
If port 9083 is not in use and the ps command shows no metastore process, try restarting the Hive Metastore service from the Ambari UI and run the same checks again.
Error:
Query for candidates of org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics and subclasses resulted in no possible candidates
Required table missing : "`CDS`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "`CDS`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:485)
First, check whether the MySQL JDBC driver jar is present under Hive's lib directory.
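A quick way to check (assuming $HIVE_HOME points at the Hive installation):
ls $HIVE_HOME/lib | grep -i mysql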
Solution
1. Go to $HIVE_HOME and run schematool with the initSchema option:
bin/schematool -dbType mysql -initSchema
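schematool also has an -info option that prints the schema version stored in the database, which is handy for verifying the result:
bin/schematool -dbType mysql -info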
2. Modify the Hive configuration so DataNucleus is allowed to create its own tables and views:
<property>
  <name>datanucleus.autoCreateTables</name>
  <value>true</value>
</property>
Then restart Hive.
3. Inspect the database Hive uses for its metastore, e.g. the hive database in MySQL.
Log in to MySQL:
mysql -uroot -p******
List the tables:
use hive;
show tables;
select * from <sometable>;
The tables should look roughly like this:
+---------------------------+
| Tables_in_hive |
+---------------------------+
| bucketing_cols |
| cds |
| columns_v2 |
| compaction_queue |
| completed_txn_components |
| database_params |
| db_privs |
| dbs |
| delegation_tokens |
| func_ru |
| funcs |
| global_privs |
| hive_locks |
| idxs |
| index_params |
| master_keys |
| next_compaction_queue_id |
| next_lock_id |
| next_txn_id |
| notification_log |
| notification_sequence |
| nucleus_tables |
| part_col_privs |
| part_col_stats |
| part_privs |
| partition_events |
| partition_key_vals |
| partition_keys |
| partition_params |
| partitions |
| role_map |
| roles |
| sd_params |
| sds |
| sequence_table |
| serde_params |
| serdes |
| skewed_col_names |
| skewed_col_value_loc_map |
| skewed_string_list |
| skewed_string_list_values |
| skewed_values |
| sort_cols |
| tab_col_stats |
| table_params |
| tbl_col_privs |
| tbl_privs |
| tbls |
| txn_components |
| txns |
| type_fields |
| types |
| version |
+---------------------------+
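As one example, the version table from the list above records the metastore schema version, which helps when diagnosing schema mismatches:
select * from version;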
4. Last-resort solution
Reinstall Hive and rename its metastore database to hive2 or some other name.
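Before reinstalling, it is worth backing up the old metastore database; a minimal sketch (assuming the metastore lives in the MySQL database named hive):
mysqldump -uroot -p hive > hive_metastore_backup.sql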
Error:
The Network Adapter could not establish the connection
No columns to generate for ClassWriter
Could not load db driver class: com.mysql.jdbc.Driver
Solution
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.45/mysql-connector-java-5.1.45.jar
cp mysql-connector-java-5.1.45.jar /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/sqoop/lib/
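To confirm the driver jar landed in the right place (the path is the one used in the cp above):
ls /opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/sqoop/lib/ | grep -i mysql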
Disable the system swap area
To take effect immediately:
swapoff -a
Recommended: also minimize the kernel's tendency to swap (the command below lasts until reboot; see the sketch after it for making the change permanent):
sysctl vm.swappiness=0
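To make the setting survive reboots, it can be appended to /etc/sysctl.conf; a minimal sketch:
echo 'vm.swappiness=0' >> /etc/sysctl.conf
sysctl -p    # reload settings from /etc/sysctl.conf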