GoAhead 3.3.6 Source Code Analysis

This article describes how the GoAhead web server handles browser requests, covering both access to web page files and the API-call path. It walks through the fileHandler function step by step: checking the file type, opening the file, writing the response headers, and finally transferring the data.

The logic splits into two paths: serving web page files and servicing API calls.
First, the page-access flow. The page-access handler is registered at startup; the program must return HTML files along with related resources such as images and CSS.

The flow above uses websDefineHandler to add the page-serving handler to the handler table. The logic that actually returns the page file lives in fileHandler, and the data is finally written to the corresponding TCP connection with a send-style call.

// fileHandler decides whether and how to return HTML file data to the browser
static bool fileHandler(Webs *wp)
{
    WebsFileInfo    info;
    char            *tmp, *date;
    ssize           nchars;
    int             code;
    assert(websValid(wp));
    assert(wp->method);
    assert(wp->filename && wp->filename[0]);
    printf("%s(%d)filename=%s\n", __FILE__, __LINE__, wp->filename); // filename=/web/index.html
    printf("%s(%d)method=%s\n", __FILE__, __LINE__, wp->method);    // method=GET

    /*
        If the file is a directory, redirect using the nominated default page
        (e.g. /web/ becomes /web/index.html via websIndex)
     */
    if (websPageIsDirectory(wp)) { // returns 0 for a file, 1 for a directory
        nchars = strlen(wp->path);
        if (wp->path[nchars - 1] == '/' || wp->path[nchars - 1] == '\\') {
            wp->path[--nchars] = '\0';
        }
        tmp = sfmt("%s/%s", wp->path, websIndex);
        websRedirect(wp, tmp);
        wfree(tmp);
        return 1;
    } else {
        printf("%s(%d)call websPageIsDirectory %s >>> not a directory\n", __FILE__, __LINE__, wp->filename);
    }
    if (websPageOpen(wp, O_RDONLY | O_BINARY, 0666) <= 0) { // changed during the WinCE port: returns 0 when the (HTML) file cannot be opened
        websError(wp, HTTP_CODE_NOT_FOUND, "Cannot open document for: %s", wp->path);
        return 1;
    }
    if (websPageStat(wp, &info) < 0) { // uses GetFileAttributesEx (a Windows CE API) to fetch the file size, modification time and other attributes
        websError(wp, HTTP_CODE_NOT_FOUND, "Cannot stat page for URL");
        return 1;
    }
    code = 200;
    if (wp->since && info.mtime <= wp->since) {
        // If-Modified-Since check: the file has not changed since the browser cached it, so answer 304 with an empty body
        code = 304;
        info.size = 0;
    }
    websSetStatus(wp, code);
    websWriteHeaders(wp, info.size, 0);
    if ((date = websGetDateString(&info)) != NULL) {
        websWriteHeader(wp, "Last-Modified", "%s", date);
        wfree(date);
    }
    websWriteEndHeaders(wp);

    /*
        All done if the browser did a HEAD request: only the headers are sent, no body
     */
    if (smatch(wp->method, "HEAD")) {
        websDone(wp);
        return 1;
    }
    if (info.size > 0) {
        websSetBackgroundWriter(wp, fileWriteEvent);
    } else {
        websDone(wp);
    }

    return 1;
}

As a web server, GoAhead must wait for browsers to connect, which means waiting for events to occur. It waits in a blocking fashion; the flow is as follows.


PUBLIC void websServiceEvents(int *finished)
{
    WebsTime    delay, nextEvent;

    if (finished) {
        *finished = 0;
    }
    delay = 0;
    while (!finished || !*finished) {
        /* Block until a socket is ready or the timeout expires */
        if (socketSelect(-1, delay)) {
            socketProcess();
        }
        /* Run due timer events and sleep until the next one is scheduled */
        delay = MAXINT;
        nextEvent = websRunEvents();
        delay = min(delay, nextEvent);
    }
}

Translation of the API comment:
Basic event loop. socketReady returns true when a socket is ready for service; socketSelect blocks until an event occurs; socketProcess then actually services the ready sockets.
websServiceEvents is called at the end of main and loops forever, so the program does not exit. It normally sits blocked waiting for a request from the browser, processes it when it arrives, and then returns to waiting for the next one.

When an event arrives, the flow above is what ultimately calls into fileHandler.
