Effective C++ Items 11-17

11. Handle assignment to self in operator=

Look at the following code to see why self-assignment is dangerous:

class Bitmap { ... };

class Widget {
    ...
private:
    Bitmap *pb;   // pointer to a heap-allocated Bitmap
};

Widget& Widget::operator=(const Widget& rhs)
{
    delete pb;                   // delete the current bitmap
    pb = new Bitmap(*rhs.pb);    // copy rhs's bitmap
    return *this;
}

If *this and rhs are the same object, delete pb destroys rhs's bitmap as well, so the subsequent new Bitmap(*rhs.pb) copies from an already-deleted object. And even without self-assignment, if new Bitmap throws an exception, this object's pb has already been released, so *this is left holding (and returning) a dangling pointer. Either way the code is unsafe.

Solution: do not delete pb until after you have copied what it points to.

Widget& Widget::operator=(const Widget& rhs)
{
    Bitmap *pOrig = pb;          // remember the original pb
    pb = new Bitmap(*rhs.pb);    // point pb to a copy of rhs's bitmap
    delete pOrig;                // delete the original only after the copy succeeds
    return *this;
}

Alternatively, use the copy-and-swap technique:

Widget& Widget::operator=(Widget rhs)   // rhs is a copy of the source object (passed by value)
{
    swap(rhs);                          // swap *this's data with the copy's data
    return *this;
}

Summary

Make sure operator= is well behaved when an object is assigned to itself. Relevant techniques include the identity test, careful ordering of statements (delete the old pointer only after the new allocation succeeds), and copy-and-swap. Also make sure that any function operating on more than one object behaves correctly when two or more of those objects are the same object.
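
For completeness, a minimal sketch of the identity test mentioned above, combined with the reordered delete:

Widget& Widget::operator=(const Widget& rhs)
{
    if (this == &rhs) return *this;   // identity test: do nothing on self-assignment

    Bitmap *pOrig = pb;
    pb = new Bitmap(*rhs.pb);
    delete pOrig;
    return *this;
}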

12. Copy all parts of an object

When you write your own copy constructor and copy assignment operator (collectively, the copying functions), the compiler will not warn you if you forget to copy some member variable. So once you write your own copying functions, every time you add a data member to the class you must update the copying functions as well as the constructors you wrote.

Summary: when you write copying functions, make sure you copy all of the class's local data members, and also invoke the appropriate copying function in every base class.
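
A minimal sketch of what "invoke the base class copying functions" means in practice (the class names here are illustrative):

#include <string>

class Customer {
public:
    Customer(const Customer& rhs) : name(rhs.name) {}
    Customer& operator=(const Customer& rhs)
    {
        name = rhs.name;              // copy every local data member
        return *this;
    }
private:
    std::string name;
};

class PriorityCustomer : public Customer {
public:
    PriorityCustomer(const PriorityCustomer& rhs)
        : Customer(rhs),              // invoke the base class copy constructor
          priority(rhs.priority)      // copy every local data member
    {}
    PriorityCustomer& operator=(const PriorityCustomer& rhs)
    {
        Customer::operator=(rhs);     // invoke the base class copy assignment
        priority = rhs.priority;
        return *this;
    }
private:
    int priority;
};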

13. Use objects to manage resources

1) Once a resource has been acquired, it must be returned to the system when it is no longer needed. Resources include dynamically allocated memory, file descriptors, mutex locks, fonts and brushes in GUIs, database connections, and network sockets.

2) Put resources inside objects, so that you can rely on the automatic invocation of destructors to make sure the resources are released.

3) The two key ideas behind "using objects to manage resources":

  • RAII: Resource Acquisition Is Initialization, i.e. the moment a resource is acquired is the moment the managing object is initialized.
  • The managing object uses its destructor to make sure the resource is released.

4) When an auto_ptr is copied, its exclusive ownership of the resource is transferred to the copy. When a shared_ptr is copied, ownership is shared: the reference count goes up. However, both destructors perform delete rather than delete[], so neither is suitable for dynamically allocated arrays; if you need delete[] semantics, use boost::scoped_array or boost::shared_array.

Summary: to prevent resource leaks, use RAII objects, which acquire resources in their constructors and release them in their destructors. Commonly used RAII objects are tr1::shared_ptr and auto_ptr.
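
A minimal sketch of the idea, using std::shared_ptr (the standardized successor of the tr1::shared_ptr mentioned in the notes); the Investment class and createInvestment factory are illustrative assumptions:

#include <memory>

class Investment { /* ... */ };

// Factory returning a raw pointer to a heap object that the caller must release.
Investment* createInvestment() { return new Investment; }

void useInvestment()
{
    std::shared_ptr<Investment> pInv(createInvestment());  // acquire the resource and initialize the manager (RAII)
    // ... use pInv much like a raw pointer ...
}   // pInv's destructor runs here and deletes the Investment, even if an exception is thrown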

14. Think carefully about copying behavior in resource-managing classes

Copying an RAII object means the resource it manages must be dealt with as well, so the copying behavior of the resource determines the copying behavior of the RAII object.

Common copying behaviors for an RAII class are: prohibit copying, reference-count the underlying resource (as shared_ptr does), or copy (deep-copy) the underlying resource.
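
A minimal sketch of the reference-counting option, using std::mutex and a std::shared_ptr with a custom deleter (both choices are assumptions of this sketch, not something the notes specify):

#include <memory>
#include <mutex>

// Reference-counted lock: copies share ownership of the locked mutex,
// and the custom deleter unlocks it when the last copy goes away.
class SharedLock {
public:
    explicit SharedLock(std::mutex* pm)
        : mutexPtr(pm, [](std::mutex* m) { m->unlock(); })  // deleter unlocks instead of deleting
    {
        mutexPtr->lock();   // acquire the resource at construction
    }
    // The compiler-generated copy constructor and copy assignment simply copy mutexPtr,
    // which bumps the reference count: exactly the desired copying behavior.
private:
    std::shared_ptr<std::mutex> mutexPtr;
};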

15. Provide access to raw resources in resource-managing classes

Sometimes a function's parameter must be a raw pointer to some class. In that case you can use the get member function provided by tr1::shared_ptr and auto_ptr to perform an explicit conversion: it returns a copy of the raw pointer held inside the smart pointer.

Summary: APIs often require access to raw resources, so if you implement your own RAII class, you should provide a way to get at the raw resource it manages.
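
A minimal sketch of the explicit conversion via get(); Investment and daysHeld are illustrative stand-ins for an API that insists on a raw pointer:

#include <memory>

class Investment { /* ... */ };

// A legacy-style API that takes a raw pointer.
int daysHeld(const Investment* pi) { return pi ? 1 : 0; }   // dummy implementation

void example()
{
    std::shared_ptr<Investment> pInv(new Investment);
    int days = daysHeld(pInv.get());   // get() returns a copy of the raw pointer inside pInv
    (void)days;
}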

16. Use the same form in corresponding uses of new and delete

Summary: if you use [] in a new expression, you must use [] in the corresponding delete expression; if you don't use [] in the new expression, don't use [] in the corresponding delete expression. That is: new [] --> delete [] ; new --> delete.

PS: to avoid hard-to-predict errors, try not to use typedefs for array types.

e.g.:

#include <string>

typedef std::string AddressLines[4];   // AddressLines is "array of 4 std::string"
std::string *pal = new AddressLines;   // equivalent to new std::string[4]
delete pal;      // undefined behavior: scalar delete applied to an array new
delete [] pal;   // correct: matches the array form of new

17. Store newed objects in smart pointers in standalone statements

Because the compiler may evaluate the parts of a compound expression in an unspecified order, an exception thrown between the new and the smart pointer's constructor can leak the newly created object. So use a separate statement: first new the object and place it in a smart pointer, and only then pass the smart pointer to the function.
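
A minimal sketch of the problem and the fix, assuming a hypothetical processWidget function and priority helper (neither is defined in these notes):

#include <memory>

class Widget { /* ... */ };                             // stand-in class for this sketch
int priority() { return 0; }                            // hypothetical helper that might throw
void processWidget(std::shared_ptr<Widget>, int) {}     // hypothetical function taking a smart pointer

int main()
{
    // Risky: argument evaluation order is unspecified. The compiler may run
    //   new Widget -> priority() -> shared_ptr constructor,
    // and if priority() throws in the middle, the new'd Widget leaks.
    processWidget(std::shared_ptr<Widget>(new Widget), priority());

    // Safe: a standalone statement binds the new'd object to the smart pointer
    // before anything else has a chance to throw.
    std::shared_ptr<Widget> pw(new Widget);
    processWidget(pw, priority());
}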
