Caffe output analysis + debug_info

This article walks through the information Caffe prints during training, including the network structure, per-layer creation status, and iteration output, and shows how to use it to follow training progress and network behavior.


1 Basic Caffe output information

During training, Caffe prints status information to the screen. Reading it is the easiest way to check that training is running normally and to track down bugs.
Debug output is disabled by default; in that case the overall structure of the log is as follows:
(Figure: overall structure of the training log.)

1.1 Solver loading and display

(Figure: solver parameters as loaded and echoed at startup.)

1.2 Train network structure output

(Figure: train network structure as printed in the log.)

1.3 Train: per-layer creation status
I0821 09:53:35.572999 10308 layer_factory.hpp:77] Creating layer mnist    #### create the first layer
I0821 09:53:35.572999 10308 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0821 09:53:35.572999 10308 net.cpp:100] Creating Layer mnist
I0821 09:53:35.572999 10308 net.cpp:418] mnist -> data
I0821 09:53:35.572999 10308 net.cpp:418] mnist -> label
I0821 09:53:35.572999 10308 data_transformer.cpp:25] Loading mean file from: ....../image_mean.binaryproto
I0821 09:53:35.579999 11064 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0821 09:53:35.580999 11064 db_lmdb.cpp:40] Opened lmdb ......./train/trainlmdb
I0821 09:53:35.623999 10308 data_layer.cpp:41] output data size: 100,3,32,32  ### output blob size
I0821 09:53:35.628999 10308 net.cpp:150] Setting up mnist
I0821 09:53:35.628999 10308 net.cpp:157] Top shape: 100 3 32 32 (307200)
I0821 09:53:35.628999 10308 net.cpp:157] Top shape: 100 (100)
I0821 09:53:35.628999 10308 net.cpp:165] Memory required for data: 1229200
I0821 09:53:35.628999 10308 layer_factory.hpp:77] Creating layer conv1   ##### create the second layer
I0821 09:53:35.628999 10308 net.cpp:100] Creating Layer conv1
I0821 09:53:35.628999 10308 net.cpp:444] conv1 <- data
I0821 09:53:35.628999 10308 net.cpp:418] conv1 -> conv1
I0821 09:53:35.629999  7532 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0821 09:53:35.909999 10308 net.cpp:150] Setting up conv1
I0821 09:53:35.909999 10308 net.cpp:157] Top shape: 100 64 28 28 (5017600)  #### output blob size
I0821 09:53:35.909999 10308 net.cpp:165] Memory required for data: 21299600
.
.
.
I0821 09:53:35.914000 10308 layer_factory.hpp:77] Creating layer loss
I0821 09:53:35.914000 10308 net.cpp:150] Setting up loss
I0821 09:53:35.914000 10308 net.cpp:157] Top shape: (1)
I0821 09:53:35.914000 10308 net.cpp:160]     with loss weight 1
I0821 09:53:35.914000 10308 net.cpp:165] Memory required for data: 49322804
I0821 09:53:35.914000 10308 net.cpp:226] loss needs backward computation.    ######## backward-computation status for each layer
I0821 09:53:35.914000 10308 net.cpp:226] ip2 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] relu3 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] ip1 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] pool2 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] relu2 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] conv2 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] pool1 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] relu1 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] conv1 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:228] mnist does not need backward computation.
I0821 09:53:35.914000 10308 net.cpp:270] This network produces output loss   ######## number and names of the network output nodes (important): all later output values refer to these nodes ###############
I0821 09:53:35.914000 10308 net.cpp:283] Network initialization done.  ### network creation finished
1. The creation log gives the blob size at every node of the network (the sketch after this list reproduces the memory counter).
2. It also tells you which nodes the network will ultimately output.
3. The test net is created the same way as the train net, so its log is not repeated here.
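
The running "Memory required for data" counter is easy to reproduce by hand: Caffe accumulates the element counts of all top blobs created so far and multiplies by 4 bytes per float32. A quick check against the log above:

# Memory accounting for the log above: element counts of all top blobs so far,
# times sizeof(float) = 4 bytes.
data_top  = 100 * 3 * 32 * 32        # mnist -> data:  307200 elements
label_top = 100                      # mnist -> label: 100 elements
conv1_top = 100 * 64 * 28 * 28       # conv1:          5017600 elements

print((data_top + label_top) * 4)              # 1229200, matches the data layer line
print((data_top + label_top + conv1_top) * 4)  # 21299600, matches the conv1 line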
1.4 Train and Test iteration output
I0821 09:53:35.929999 10308 solver.cpp:60] Solver scaffolding done.
I0821 09:53:35.929999 10308 caffe.cpp:252] Starting Optimization     ####### training starts
I0821 09:53:35.929999 10308 solver.cpp:279] Solving LeNet
I0821 09:53:35.929999 10308 solver.cpp:280] Learning Rate Policy: multistep
I0821 09:53:35.930999 10308 solver.cpp:337] Iteration 0, Testing net (#0)                                                           #### Test(Iteration 0)
I0821 09:53:35.993999 10308 blocking_queue.cpp:50] Data layer prefetch queue empty
I0821 09:53:36.180999 10308 solver.cpp:404]     Test net output #0: accuracy = 0.1121                                           #### Test (Iteration 0) output node 0: accuracy (determined by the net definition)
I0821 09:53:36.180999 10308 solver.cpp:404]     Test net output #1: loss = 2.30972 (* 1 = 2.30972 loss)     #### Test (Iteration 0) output node 1: loss (determined by the net definition)

I0821 09:53:36.190999 10308 solver.cpp:228] Iteration 0, loss = 2.2891                                                                      #### Train (Iteration 0) network loss value
I0821 09:53:36.190999 10308 solver.cpp:244]     Train net output #0: loss = 2.2891 (* 1 = 2.2891 loss)    #### Train (Iteration 0) the only output value
I0821 09:53:36.190999 10308 sgd_solver.cpp:106] Iteration 0, lr = 0.001                                                                     #### Train (Iteration 0)

I0821 09:53:36.700999 10308 solver.cpp:228] Iteration 100, loss = 2.24716                                                                   #### Train (Iteration 100)
I0821 09:53:36.700999 10308 solver.cpp:244]     Train net output #0: loss = 2.24716 (* 1 = 2.24716 loss)    #### Train (Iteration 100)
I0821 09:53:36.700999 10308 sgd_solver.cpp:106] Iteration 100, lr = 0.001                                                                   #### Train (Iteration 100)
I0821 09:53:37.225999 10308 solver.cpp:228] Iteration 200, loss = 2.08563
I0821 09:53:37.225999 10308 solver.cpp:244]     Train net output #0: loss = 2.08563 (* 1 = 2.08563 loss)
I0821 09:53:37.225999 10308 sgd_solver.cpp:106] Iteration 200, lr = 0.001
I0821 09:53:37.756000 10308 solver.cpp:228] Iteration 300, loss = 2.11631
I0821 09:53:37.756000 10308 solver.cpp:244]     Train net output #0: loss = 2.11631 (* 1 = 2.11631 loss)
I0821 09:53:37.756000 10308 sgd_solver.cpp:106] Iteration 300, lr = 0.001
I0821 09:53:38.286999 10308 solver.cpp:228] Iteration 400, loss = 1.89424
I0821 09:53:38.286999 10308 solver.cpp:244]     Train net output #0: loss = 1.89424 (* 1 = 1.89424 loss)
I0821 09:53:38.286999 10308 sgd_solver.cpp:106] Iteration 400, lr = 0.001
I0821 09:53:38.819999 10308 solver.cpp:337] Iteration 500, Testing net (#0)                                                             #### Test(Iteration 500)
I0821 09:53:39.069999 10308 solver.cpp:404]     Test net output #0: accuracy = 0.3232                                           #### Test(Iteration 500)
I0821 09:53:39.069999 10308 solver.cpp:404]     Test net output #1: loss = 1.87822 (* 1 = 1.87822 loss)     #### Test(Iteration 500)
I0821 09:53:39.072999 10308 solver.cpp:228] Iteration 500, loss = 1.94478
I0821 09:53:39.072999 10308 solver.cpp:244]     Train net output #0: loss = 1.94478 (* 1 = 1.94478 loss)
I0821 09:53:39.072999 10308 sgd_solver.cpp:106] Iteration 500, lr = 0.001
From this output, one reporting cycle for Test and for Train looks like this:
  • Test, one cycle
    – Iteration 0, Testing net (#0)
    – Test net output #0: accuracy = 0.1121
    – Test net output #1: loss = 2.30972 (* 1 = 2.30972 loss)
  • Train, one cycle (a quick extraction sketch follows this list)
    – Iteration 0, loss = 2.2891
    – Train net output #0: loss = 2.2891 (* 1 = 2.2891 loss)
    – Iteration 0, lr = 0.001
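
Because these lines are so regular, the train-loss curve can be pulled out of a raw log with a few lines of Python. A minimal sketch; 'caffe.log' is a placeholder for your actual log path:

import re

# Extract (iteration, train loss) pairs from a raw Caffe log for quick inspection.
train_loss = re.compile(r'Iteration (\d+), loss = ([\d.eE+-]+)')
with open('caffe.log') as f:                      # placeholder path
    pairs = [(int(m.group(1)), float(m.group(2)))
             for m in (train_loss.search(line) for line in f) if m]
print(pairs[:3])   # e.g. [(0, 2.2891), (100, 2.24716), (200, 2.08563)]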

2 Log parsing

2.1 debug_info output

  • Add debug_info: true to the solver prototxt (a minimal sketch follows).
  • This enables Caffe's per-layer debug output.
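A minimal solver sketch; the net path and hyperparameter values here are placeholders, only the last line matters for this section:

# solver.prototxt -- placeholder values except debug_info
net: "lenet_train_test.prototxt"
base_lr: 0.001
lr_policy: "multistep"
stepvalue: 5000
max_iter: 10000
debug_info: true    # enable per-layer [Forward]/[Backward] logging

With debug_info on, every iteration additionally logs per-blob summary values in lines of the following shape (the numbers are illustrative; the formats are exactly what the regexes in the script below match):

[Forward] Layer conv1, top blob conv1 data: 0.0434
[Backward] Layer conv1, bottom blob data diff: 0.000561
[Backward] Layer conv1, param blob 0 diff: 0.00144
All net params (data, diff): L1 norm = (199.33, 0.0441); L2 norm = (2.136, 0.000314)

The following script, an extension of Caffe's tools/extra/parse_log.py (it also expects extract_seconds.py from the same directory), parses both the normal train/test output and these debug lines into CSV tables:
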
import os
import re
import extract_seconds
import argparse
import csv
from collections import OrderedDict

def get_datadiff_paradiff(line,data_row,para_row,data_list,para_list,L_list,L_row,top_list,top_row,iteration):
    """Parse one debug_info line into the data-diff, param-diff, L1/L2-norm
    and forward top-data tables; a row in progress is flushed into its list
    whenever a new training iteration marker has been seen (iteration > -1)."""
    regex_data=re.compile(r'\[Backward\] Layer (\S+), bottom blob (\S+) diff: ([\.\deE+-]+)')
    regex_para=re.compile(r'\[Backward\] Layer (\S+), param blob (\d+) diff: ([\.\deE+-]+)')
    regex_L1L2=re.compile(r'All net params \(data, diff\): L1 norm = \(([\.\deE+-]+), ([\.\deE+-]+)\); L2 norm = \(([\.\deE+-]+), ([\.\deE+-]+)\)')

    regex_topdata=re.compile(r'\[Forward\] Layer (\S+), (\S+) blob (\S+) data: ([\.\deE+-]+)')

    out_match_data=regex_data.search(line)
    if out_match_data or iteration>-1:
        if not data_row or iteration>-1 :
            if data_row:
                data_row['NumIters']=iteration
                data_list.append(data_row)
            data_row = OrderedDict()
        if out_match_data : 
            layer_name=out_match_data.group(1)
            blob_name=out_match_data.group(2)
            data_diff_value=out_match_data.group(3)
            key=layer_name+'-'+blob_name
            data_row[key]=float(data_diff_value)

    out_match_para=regex_para.search(line)
    if out_match_para or iteration>-1:
        if not para_row or iteration>-1:
            if para_row:
                para_row['NumIters']=iteration
                para_list.append(para_row)
            para_row=OrderedDict()
        if out_match_para:
            layer_name=out_match_para.group(1)
            param_d=out_match_para.group(2)
            para_diff_value=out_match_para.group(3)
            layer_name=layer_name+'-blob'+'-'+param_d
            para_row[layer_name]=float(para_diff_value)  # store floats, consistent with the other tables

    out_match_norm=regex_L1L2.search(line)
    if out_match_norm or iteration>-1:
        if not L_row or iteration>-1:
            if L_row:
                L_row['NumIters']=iteration
                L_list.append(L_row)
            L_row=OrderedDict()
        if out_match_norm:
            L_row['data-L1']=float(out_match_norm.group(1))
            L_row['diff-L1']=float(out_match_norm.group(2))
            L_row['data-L2']=float(out_match_norm.group(3))
            L_row['diff-L2']=float(out_match_norm.group(4))


    out_match_top=regex_topdata.search(line)
    if out_match_top or iteration>-1:
        if not top_row or iteration>-1:
            if top_row:
                top_row['NumIters']=iteration
                top_list.append(top_row)
            top_row=OrderedDict()
        if out_match_top:
            layer_name=out_match_top.group(1)
            top_para=out_match_top.group(2)
            blob_or_num=out_match_top.group(3)
            key=layer_name+'-'+top_para+'-'+blob_or_num
            data_value=out_match_top.group(4)
            top_row[key]=float(data_value)


    return data_list,data_row,para_list,para_row,L_list,L_row,top_list,top_row
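# Illustration (hypothetical values): one [Backward] line adds one
# 'layer-blob' -> value entry to the current data-diff row, e.g.
#
#   _, row, _, _, _, _, _, _ = get_datadiff_paradiff(
#       '[Backward] Layer conv1, bottom blob data diff: 0.000561',
#       None, None, [], [], [], None, [], None, -1)
#   # row == OrderedDict([('conv1-data', 0.000561)])
#
# Passing iteration >= 0 on a later call stamps 'NumIters' on the row and
# flushes it into the corresponding list.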



def parse_log(path_to_log):
    """Parse log file
    Returns (train_dict_list, train_dict_names, test_dict_list, test_dict_names)

    train_dict_list and test_dict_list are lists of dicts that define the table
    rows

    train_dict_names and test_dict_names are ordered tuples of the column names
    for the two dict_lists
    """

    regex_iteration = re.compile(r'Iteration (\d+)')
    regex_train_iteration=re.compile(r'Iteration (\d+), loss')
    regex_train_output = re.compile(r'Train net output #(\d+): (\S+) = ([\.\deE+-]+)')
    regex_test_output = re.compile(r'Test net output #(\d+): (\S+) = ([\.\deE+-]+)')
    regex_learning_rate = re.compile(r'lr = ([-+]?[0-9]*\.?[0-9]+([eE]?[-+]?[0-9]+)?)')


    # Pick out lines of interest
    iteration = -1
    train_iter=-1
    learning_rate = float('NaN')
    train_dict_list = []
    test_dict_list = []
    train_row = None
    test_row = None


    data_diff_list=[]
    para_diff_list=[]
    L1L2_list=[]
    top_list=[]
    data_diff_row=None
    para_diff_row=None
    L1L2_row = None
    top_row=None



    logfile_year = extract_seconds.get_log_created_year(path_to_log)
    with open(path_to_log) as f:
        start_time = extract_seconds.get_start_time(f, logfile_year)

        for line in f:
            iteration_match = regex_iteration.search(line)
            train_iter_match=regex_train_iteration.search(line)

            if iteration_match:
                iteration = float(iteration_match.group(1))
            if train_iter_match:
                train_iter=float(train_iter_match.group(1))

            if iteration == -1:
                # Only start parsing for other stuff if we've found the first
                # iteration
                continue


            time = extract_seconds.extract_datetime_from_line(line,
                                                              logfile_year)
            seconds = (time - start_time).total_seconds()

            learning_rate_match = regex_learning_rate.search(line)
            if learning_rate_match:
                learning_rate = float(learning_rate_match.group(1))


            # Debug-info tables are updated on every line; a finished row is
            # flushed once a new "Iteration N, loss" marker has been seen.
            data_diff_list,data_diff_row,para_diff_list,para_diff_row,L1L2_list,L1L2_row,top_list,top_row=get_datadiff_paradiff(
                    line,data_diff_row,para_diff_row,
                    data_diff_list,para_diff_list,
                    L1L2_list,L1L2_row,
                    top_list,top_row,
                    train_iter
                    )
            train_iter=-1

            train_dict_list, train_row = parse_line_for_net_output(
                regex_train_output, train_row, train_dict_list,
                line, iteration, seconds, learning_rate
            )
            test_dict_list, test_row = parse_line_for_net_output(
                regex_test_output, test_row, test_dict_list,
                line, iteration, seconds, learning_rate
            )

    fix_initial_nan_learning_rate(train_dict_list)
    fix_initial_nan_learning_rate(test_dict_list)

    return train_dict_list, test_dict_list,data_diff_list,para_diff_list,L1L2_list,top_list


def parse_line_for_net_output(regex_obj, row, row_dict_list,
                              line, iteration, seconds, learning_rate):
    """Parse a single line for training or test output

    Returns a tuple with (row_dict_list, row)
    row: may be either a new row or an augmented version of the current row
    row_dict_list: may be either the current row_dict_list or an augmented
    version of the current row_dict_list
    """

    output_match = regex_obj.search(line)
    if output_match:
        if not row or row['NumIters'] != iteration:
            # Push the last row and start a new one
            if row:
                # If we're on a new iteration, push the last row
                # This will probably only happen for the first row; otherwise
                # the full row checking logic below will push and clear full
                # rows
                row_dict_list.append(row)

            row = OrderedDict([
                ('NumIters', iteration),
                ('Seconds', seconds),
                ('LearningRate', learning_rate)
            ])

        # output_num is not used; may be used in the future
        # output_num = output_match.group(1)
        output_name = output_match.group(2)
        output_val = output_match.group(3)
        row[output_name] = float(output_val)

    if row and len(row_dict_list) >= 1 and len(row) == len(row_dict_list[0]):
        # The row is full, based on the fact that it has the same number of
        # columns as the first row; append it to the list
        row_dict_list.append(row)
        row = None

    return row_dict_list, row


def fix_initial_nan_learning_rate(dict_list):
    """Correct initial value of learning rate

    Learning rate is normally not printed until after the initial test and
    training step, which means the initial testing and training rows have
    LearningRate = NaN. Fix this by copying over the LearningRate from the
    second row, if it exists.
    """

    if len(dict_list) > 1:
        dict_list[0]['LearningRate'] = dict_list[1]['LearningRate']


def save_csv_files(logfile_path, output_dir, train_dict_list, test_dict_list,data_diff_list, para_diff_list,L1L2_list,top_list,
                   delimiter=',', verbose=False):
    """Save CSV files to output_dir

    If the input log file is, e.g., caffe.INFO, the names will be
    caffe.INFO.train and caffe.INFO.test
    """

    log_basename = os.path.basename(logfile_path)

    train_filename = os.path.join(output_dir, log_basename + '.train')
    write_csv(train_filename, train_dict_list, delimiter, verbose)

    test_filename = os.path.join(output_dir, log_basename + '.test')
    write_csv(test_filename, test_dict_list, delimiter, verbose)


    data_diff_filename=os.path.join(output_dir, log_basename + '.datadiff')
    write_csv(data_diff_filename, data_diff_list, delimiter, verbose)

    para_diff_filename=os.path.join(output_dir, log_basename + '.paradiff')
    write_csv(para_diff_filename, para_diff_list, delimiter, verbose)

    L1L2_filename=os.path.join(output_dir, log_basename + '.L1L2')
    write_csv(L1L2_filename, L1L2_list, delimiter, verbose)

    topdata_filename=os.path.join(output_dir, log_basename + '.topdata')
    write_csv(topdata_filename, top_list, delimiter, verbose)


def write_csv(output_filename, dict_list, delimiter, verbose=False):
    """Write a CSV file
    """

    if not dict_list:
        if verbose:
            print('Not writing %s; no lines to write' % output_filename)
        return

    dialect = csv.excel
    dialect.delimiter = delimiter

    with open(output_filename, 'w', newline='') as f:  # newline='' avoids blank rows on Windows
        dict_writer = csv.DictWriter(f, fieldnames=dict_list[0].keys(),
                                     dialect=dialect)
        dict_writer.writeheader()
        dict_writer.writerows(dict_list)
    if verbose:
        print('Wrote %s' % output_filename)


def parse_args():
    description = ('Parse a Caffe training log into two CSV files '
                   'containing training and testing information')
    parser = argparse.ArgumentParser(description=description)

    parser.add_argument('logfile_path',
                        help='Path to log file')

    parser.add_argument('output_dir',
                        help='Directory in which to place output CSV files')

    parser.add_argument('--verbose',
                        action='store_true',
                        help='Print some extra info (e.g., output filenames)')

    parser.add_argument('--delimiter',
                        default=',',
                        help=('Column delimiter in output files '
                              '(default: \'%(default)s\')'))

    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    train_dict_list, test_dict_list,data_diff_list,para_diff_list,L1L2_list,top_list = parse_log(args.logfile_path)
    save_csv_files(args.logfile_path, args.output_dir, train_dict_list,
                   test_dict_list, data_diff_list, para_diff_list, L1L2_list,top_list,delimiter=args.delimiter)


if __name__ == '__main__':
    main()
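
Assuming the script is saved as parse_log_debug.py (the filename is arbitrary) next to Caffe's extract_seconds.py, a typical invocation would be:

python parse_log_debug.py /path/to/caffe.log ./csv_out --verbose

This writes caffe.log.train, caffe.log.test, caffe.log.datadiff, caffe.log.paradiff, caffe.log.L1L2 and caffe.log.topdata into ./csv_out.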

(Figure: trend plots of the parameter diffs generated from the exported CSV files.)
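
A sketch of how such a plot could be produced from the exported CSV, assuming the invocation above (matplotlib required; the column names depend on your network):

import csv
import matplotlib.pyplot as plt

# Plot the per-layer parameter-diff trends exported by the parser above.
with open('csv_out/caffe.log.paradiff') as f:      # placeholder path
    rows = list(csv.DictReader(f))

iters = [float(r['NumIters']) for r in rows]
for col in rows[0]:
    if col == 'NumIters':
        continue
    ys = [float(r[col]) if r.get(col) else float('nan') for r in rows]
    plt.plot(iters, ys, label=col)
plt.xlabel('iteration')
plt.ylabel('mean |param diff|')
plt.legend(fontsize=6)
plt.show()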
