Software Testing: 10 Rules

This article presents ten best practices for software testing, including testing early and often, integrating the application development and testing life cycles, and developing a comprehensive test plan, with the goal of helping teams improve software quality and reduce development costs.


1. Test early and test often.

2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.

3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.

4. Develop a comprehensive test plan; it forms the basis for the testing methodology.

5. Use both static and dynamic testing.
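A minimal Python sketch of the distinction in rule 5: static testing examines the code without executing it (here, parsing it into a syntax tree), while dynamic testing runs the code and checks its behavior. The `discount` function is a hypothetical example, not from the original article.

```python
import ast

source = """
def discount(price, rate):
    return price * (1 - rate)
"""

# Static testing: analyze the source without running it.
# ast.parse raises SyntaxError on malformed code, and the tree
# can be inspected for structure (functions defined, etc.).
tree = ast.parse(source)
functions = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
assert "discount" in functions

# Dynamic testing: execute the code and verify its behavior.
namespace = {}
exec(source, namespace)
assert namespace["discount"](100, 0.2) == 80.0
```

In practice the static side is usually a linter or type checker rather than hand-rolled `ast` inspection, but the division of labor is the same: one class of defects is caught by reading the code, another only by running it.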

6. Define your expected results.
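Rule 6 in miniature: a test is only meaningful when its expected result is written down before the run, so pass/fail is unambiguous. A sketch in Python, where `parse_price` and its test cases are illustrative assumptions, not part of the original article:

```python
def parse_price(text):
    """Convert a price string like '$1,299.00' to a float (hypothetical helper)."""
    return float(text.replace("$", "").replace(",", ""))

# Each case pairs an input with its expected result, defined up front.
cases = [
    ("$1,299.00", 1299.00),
    ("42", 42.0),
    ("$0.99", 0.99),
]

for text, expected in cases:
    actual = parse_price(text)
    # The explicit expected value makes the verdict unambiguous.
    assert actual == expected, f"parse_price({text!r}) = {actual}, expected {expected}"
```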

7. Understand the business reason behind the application. You'll write a better application and better testing scripts.

8. Use multiple levels and types of testing (regression, system, integration, stress, and load).
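Each level in rule 8 targets a different class of failure. A crude, hedged sketch of exercising one function at three of those levels; the `checkout_total` function and the timing threshold are illustrative assumptions:

```python
import time

def checkout_total(prices):
    # Unit under test: sum line items to two decimal places (illustrative).
    return round(sum(prices), 2)

# Unit test: one function, in isolation, against a known case.
assert checkout_total([19.99, 5.01]) == 25.00

# Regression test: pin a previously observed correct result
# (here, the empty-cart edge case) so future changes can't break it silently.
assert checkout_total([]) == 0

# Load test (crude): the function stays fast under many repeated calls.
start = time.perf_counter()
for _ in range(100_000):
    checkout_total([19.99, 5.01])
elapsed = time.perf_counter() - start
assert elapsed < 5.0  # generous threshold, for illustration only
```

Real stress and load testing drives the whole system with dedicated tooling rather than a loop, but the principle scales: each level answers a question the others cannot.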

9. Review and inspect the work; it will lower costs.

10. Don't let your programmers check their own work; they'll miss their own errors. 

— by Mikhail Rakhunov, SQAtester.com contributor
