Caffe log output explained + debug_info

This article walks through the information Caffe prints during training (network structure, per-layer creation status, iteration reports) and shows how to use it to follow training progress and network behavior.

1 Basic Caffe output

While training, Caffe prints its running status to the screen. Watching this output is a convenient way to check that the run is healthy and to track down bugs.
Debug info is off by default; in that case the overall structure of the output is as follows:
(figure: overall structure of the default log output)

1.1 Solver information loaded and printed

(figure: solver parameters loaded and printed)

1.2 Train network structure output

(figure: train network structure printed layer by layer)

1.3 Train: per-layer creation status
I0821 09:53:35.572999 10308 layer_factory.hpp:77] Creating layer mnist    #### create the first layer
I0821 09:53:35.572999 10308 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0821 09:53:35.572999 10308 net.cpp:100] Creating Layer mnist
I0821 09:53:35.572999 10308 net.cpp:418] mnist -> data
I0821 09:53:35.572999 10308 net.cpp:418] mnist -> label
I0821 09:53:35.572999 10308 data_transformer.cpp:25] Loading mean file from: ....../image_mean.binaryproto
I0821 09:53:35.579999 11064 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0821 09:53:35.580999 11064 db_lmdb.cpp:40] Opened lmdb ......./train/trainlmdb
I0821 09:53:35.623999 10308 data_layer.cpp:41] output data size: 100,3,32,32  ### output blob size
I0821 09:53:35.628999 10308 net.cpp:150] Setting up mnist
I0821 09:53:35.628999 10308 net.cpp:157] Top shape: 100 3 32 32 (307200)
I0821 09:53:35.628999 10308 net.cpp:157] Top shape: 100 (100)
I0821 09:53:35.628999 10308 net.cpp:165] Memory required for data: 1229200
I0821 09:53:35.628999 10308 layer_factory.hpp:77] Creating layer conv1   ##### create the second layer
I0821 09:53:35.628999 10308 net.cpp:100] Creating Layer conv1
I0821 09:53:35.628999 10308 net.cpp:444] conv1 <- data
I0821 09:53:35.628999 10308 net.cpp:418] conv1 -> conv1
I0821 09:53:35.629999  7532 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0821 09:53:35.909999 10308 net.cpp:150] Setting up conv1
I0821 09:53:35.909999 10308 net.cpp:157] Top shape: 100 64 28 28 (5017600)  #### output blob size
I0821 09:53:35.909999 10308 net.cpp:165] Memory required for data: 21299600
.
.
.
I0821 09:53:35.914000 10308 layer_factory.hpp:77] Creating layer loss
I0821 09:53:35.914000 10308 net.cpp:150] Setting up loss
I0821 09:53:35.914000 10308 net.cpp:157] Top shape: (1)
I0821 09:53:35.914000 10308 net.cpp:160]     with loss weight 1
I0821 09:53:35.914000 10308 net.cpp:165] Memory required for data: 49322804
I0821 09:53:35.914000 10308 net.cpp:226] loss needs backward computation.    ######## backward-computation status of each layer
I0821 09:53:35.914000 10308 net.cpp:226] ip2 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] relu3 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] ip1 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] pool2 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] relu2 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] conv2 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] pool1 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] relu1 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:226] conv1 needs backward computation.
I0821 09:53:35.914000 10308 net.cpp:228] mnist does not need backward computation.
I0821 09:53:35.914000 10308 net.cpp:270] This network produces output loss   ######## number and names of network outputs (important): the metric lines printed later all refer to these outputs ###############
I0821 09:53:35.914000 10308 net.cpp:283] Network initialization done.  ### network creation finished
1. The creation log shows the blob size at every node of the network.
2. It also tells you which outputs the network will report during training.
3. The test net is created the same way as the train net, so it is not repeated here.
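The "Memory required for data" figure in the creation log is a running total over all top blobs created so far: element count times 4 bytes (float32). A minimal sketch reproducing the two numbers in the log above; the helper name is mine, not Caffe's:

```python
# Caffe's "Memory required for data" counter accumulates the element
# count of every top blob set up so far and multiplies by sizeof(float).

def memory_required(top_shapes, dtype_size=4):
    """top_shapes: shapes of every top blob set up so far."""
    total = 0
    for shape in top_shapes:
        n = 1
        for dim in shape:
            n *= dim
        total += n
    return total * dtype_size

# After the data layer: data (100,3,32,32) + label (100,) -> 1229200 bytes
print(memory_required([(100, 3, 32, 32), (100,)]))
# After adding conv1 (100,64,28,28) on top -> 21299600 bytes
print(memory_required([(100, 3, 32, 32), (100,), (100, 64, 28, 28)]))
```

Both values match the `net.cpp:165` lines in the log, which confirms how the counter grows layer by layer.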
1.4 Train and Test network iteration output
I0821 09:53:35.929999 10308 solver.cpp:60] Solver scaffolding done.
I0821 09:53:35.929999 10308 caffe.cpp:252] Starting Optimization     ####### training starts
I0821 09:53:35.929999 10308 solver.cpp:279] Solving LeNet
I0821 09:53:35.929999 10308 solver.cpp:280] Learning Rate Policy: multistep
I0821 09:53:35.930999 10308 solver.cpp:337] Iteration 0, Testing net (#0)                                                           #### Test(Iteration 0)
I0821 09:53:35.993999 10308 blocking_queue.cpp:50] Data layer prefetch queue empty
I0821 09:53:36.180999 10308 solver.cpp:404]     Test net output #0: accuracy = 0.1121                                           #### Test (Iteration 0) output node #0: accuracy (determined by the net definition)
I0821 09:53:36.180999 10308 solver.cpp:404]     Test net output #1: loss = 2.30972 (* 1 = 2.30972 loss)     #### Test (Iteration 0) output node #1: loss (determined by the net definition)

I0821 09:53:36.190999 10308 solver.cpp:228] Iteration 0, loss = 2.2891                                                                      #### Train (Iteration 0) network loss
I0821 09:53:36.190999 10308 solver.cpp:244]     Train net output #0: loss = 2.2891 (* 1 = 2.2891 loss)    #### Train (Iteration 0): only one output
I0821 09:53:36.190999 10308 sgd_solver.cpp:106] Iteration 0, lr = 0.001                                                                     #### Train (Iteration 0)

I0821 09:53:36.700999 10308 solver.cpp:228] Iteration 100, loss = 2.24716                                                                   #### Train (Iteration 100)
I0821 09:53:36.700999 10308 solver.cpp:244]     Train net output #0: loss = 2.24716 (* 1 = 2.24716 loss)    #### Train (Iteration 100)
I0821 09:53:36.700999 10308 sgd_solver.cpp:106] Iteration 100, lr = 0.001                                                                   #### Train (Iteration 100)
I0821 09:53:37.225999 10308 solver.cpp:228] Iteration 200, loss = 2.08563
I0821 09:53:37.225999 10308 solver.cpp:244]     Train net output #0: loss = 2.08563 (* 1 = 2.08563 loss)
I0821 09:53:37.225999 10308 sgd_solver.cpp:106] Iteration 200, lr = 0.001
I0821 09:53:37.756000 10308 solver.cpp:228] Iteration 300, loss = 2.11631
I0821 09:53:37.756000 10308 solver.cpp:244]     Train net output #0: loss = 2.11631 (* 1 = 2.11631 loss)
I0821 09:53:37.756000 10308 sgd_solver.cpp:106] Iteration 300, lr = 0.001
I0821 09:53:38.286999 10308 solver.cpp:228] Iteration 400, loss = 1.89424
I0821 09:53:38.286999 10308 solver.cpp:244]     Train net output #0: loss = 1.89424 (* 1 = 1.89424 loss)
I0821 09:53:38.286999 10308 sgd_solver.cpp:106] Iteration 400, lr = 0.001
I0821 09:53:38.819999 10308 solver.cpp:337] Iteration 500, Testing net (#0)                                                             #### Test(Iteration 500)
I0821 09:53:39.069999 10308 solver.cpp:404]     Test net output #0: accuracy = 0.3232                                           #### Test(Iteration 500)
I0821 09:53:39.069999 10308 solver.cpp:404]     Test net output #1: loss = 1.87822 (* 1 = 1.87822 loss)     #### Test(Iteration 500)
I0821 09:53:39.072999 10308 solver.cpp:228] Iteration 500, loss = 1.94478
I0821 09:53:39.072999 10308 solver.cpp:244]     Train net output #0: loss = 1.94478 (* 1 = 1.94478 loss)
I0821 09:53:39.072999 10308 sgd_solver.cpp:106] Iteration 500, lr = 0.001
From this output, one reporting cycle for Train and for Test looks like:
  • Test: one cycle
    –Iteration 0, Testing net (#0)
    –Test net output #0: accuracy = 0.1121
    –Test net output #1: loss = 2.30972 (* 1 = 2.30972 loss)
  • Train: one cycle
    –Iteration 0, loss = 2.2891
    –Train net output #0: loss = 2.2891 (* 1 = 2.2891 loss)
    –Iteration 0, lr = 0.001
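These per-cycle lines are regular enough to pick apart with two regular expressions. The sketch below (sample lines copied from the log above) previews what the full parsing script later in the article does:

```python
import re

# Sample lines of one Train reporting cycle, copied from the log above.
train_lines = [
    "I0821 09:53:36.190999 10308 solver.cpp:228] Iteration 0, loss = 2.2891",
    "I0821 09:53:36.190999 10308 solver.cpp:244]     Train net output #0: loss = 2.2891 (* 1 = 2.2891 loss)",
    "I0821 09:53:36.190999 10308 sgd_solver.cpp:106] Iteration 0, lr = 0.001",
]

# One regex for the loss line, one for the learning-rate line.
regex_iter = re.compile(r'Iteration (\d+), loss = ([\.\deE+-]+)')
regex_lr = re.compile(r'Iteration (\d+), lr = ([\.\deE+-]+)')

for line in train_lines:
    m = regex_iter.search(line)
    if m:
        print('iteration %d: loss = %g' % (int(m.group(1)), float(m.group(2))))
    m = regex_lr.search(line)
    if m:
        print('iteration %d: lr = %g' % (int(m.group(1)), float(m.group(2))))
```

The "Train net output" line carries no "Iteration" token, so it matches neither regex here; the full script matches it separately.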

2 debug_info output and log parsing

  • Add debug_info: true to the solver prototxt
  • This turns on Caffe's per-blob debug output for the forward and backward passes
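For context, a solver fragment with the flag enabled might look like the following; every value except debug_info is illustrative and should match your own setup:

```
net: "lenet_train_test.prototxt"
test_iter: 100
test_interval: 500
base_lr: 0.001
lr_policy: "multistep"
stepvalue: 5000
max_iter: 10000
display: 100
snapshot_prefix: "lenet"
debug_info: true   # turn on per-blob [Forward]/[Backward] logging
```

After re-running training with this flag, the script that follows extracts the extra [Forward]/[Backward] lines into CSV files.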
import os
import re
import extract_seconds
import argparse
import csv
from collections import OrderedDict

def get_datadiff_paradiff(line,data_row,para_row,data_list,para_list,L_list,L_row,top_list,top_row,iteration):
    # Raw strings avoid invalid-escape warnings in Python 3
    regex_data=re.compile(r'\[Backward\] Layer (\S+), bottom blob (\S+) diff: ([\.\deE+-]+)')
    regex_para=re.compile(r'\[Backward\] Layer (\S+), param blob (\d+) diff: ([\.\deE+-]+)')
    regex_L1L2=re.compile(r'All net params \(data, diff\): L1 norm = \(([\.\deE+-]+), ([\.\deE+-]+)\); L2 norm = \(([\.\deE+-]+), ([\.\deE+-]+)\)')

    regex_topdata=re.compile(r'\[Forward\] Layer (\S+), (\S+) blob (\S+) data: ([\.\deE+-]+)')
    #regex_toppara=re.compile('')

    out_match_data=regex_data.search(line)
    if out_match_data or iteration>-1:
        if not data_row or iteration>-1 :
            if data_row:
                data_row['NumIters']=iteration
                data_list.append(data_row)
            data_row = OrderedDict()
        if out_match_data : 
            layer_name=out_match_data.group(1)
            blob_name=out_match_data.group(2)
            data_diff_value=out_match_data.group(3)
            key=layer_name+'-'+blob_name
            data_row[key]=float(data_diff_value)

    out_match_para=regex_para.search(line)
    if out_match_para or iteration>-1:
        if not para_row or iteration>-1:
            if para_row:
                para_row['NumIters']=iteration
                para_list.append(para_row)
            para_row=OrderedDict()
        if out_match_para:
            layer_name=out_match_para.group(1)
            param_d=out_match_para.group(2)
            para_diff_value=out_match_para.group(3)
            layer_name=layer_name+'-blob'+'-'+param_d
            para_row[layer_name]=float(para_diff_value)  # store as float, like the other tables

    out_match_norm=regex_L1L2.search(line)
    if out_match_norm or iteration>-1:
        if not L_row or iteration>-1:
            if L_row:
                L_row['NumIters']=iteration
                L_list.append(L_row)
            L_row=OrderedDict()
        if out_match_norm:
            L_row['data-L1']=float(out_match_norm.group(1))
            L_row['diff-L1']=float(out_match_norm.group(2))
            L_row['data-L2']=float(out_match_norm.group(3))
            L_row['diff-L2']=float(out_match_norm.group(4))


    out_match_top=regex_topdata.search(line)
    if out_match_top or iteration>-1:
        if not top_row or iteration>-1:
            if top_row:
                top_row['NumIters']=iteration
                top_list.append(top_row)
            top_row=OrderedDict()
        if out_match_top:
            layer_name=out_match_top.group(1)
            top_para=out_match_top.group(2)
            blob_or_num=out_match_top.group(3)
            key=layer_name+'-'+top_para+'-'+blob_or_num
            data_value=out_match_top.group(4)
            top_row[key]=float(data_value)


    return data_list,data_row,para_list,para_row,L_list,L_row,top_list,top_row



def parse_log(path_to_log):
    """Parse log file

    Returns (train_dict_list, test_dict_list, data_diff_list,
    para_diff_list, L1L2_list, top_list)

    Each element is a list of OrderedDicts defining the rows of one
    output table: train metrics, test metrics, bottom-blob diffs,
    param-blob diffs, L1/L2 norms, and forward top-blob data.
    """

    regex_iteration = re.compile(r'Iteration (\d+)')
    regex_train_iteration=re.compile(r'Iteration (\d+), loss')
    regex_train_output = re.compile(r'Train net output #(\d+): (\S+) = ([\.\deE+-]+)')
    regex_test_output = re.compile(r'Test net output #(\d+): (\S+) = ([\.\deE+-]+)')
    regex_learning_rate = re.compile(r'lr = ([-+]?[0-9]*\.?[0-9]+([eE]?[-+]?[0-9]+)?)')
    regex_backward = re.compile(r'\[Backward\] Layer ')


    # Pick out lines of interest
    iteration = -1
    train_iter=-1
    learning_rate = float('NaN')
    train_dict_list = []
    test_dict_list = []
    train_row = None
    test_row = None


    data_diff_list=[]
    para_diff_list=[]
    L1L2_list=[]
    top_list=[]
    data_diff_row=None
    para_diff_row=None
    L1L2_row = None
    top_row=None



    logfile_year = extract_seconds.get_log_created_year(path_to_log)
    with open(path_to_log) as f:
        start_time = extract_seconds.get_start_time(f, logfile_year)

        for line in f:
            iteration_match = regex_iteration.search(line)
            train_iter_match=regex_train_iteration.search(line)

            if iteration_match:
                iteration = float(iteration_match.group(1))
            if train_iter_match:
                train_iter=float(train_iter_match.group(1))

            if iteration == -1:
                # Only start parsing for other stuff if we've found the first
                # iteration
                continue


            time = extract_seconds.extract_datetime_from_line(line,
                                                              logfile_year)
            seconds = (time - start_time).total_seconds()

            learning_rate_match = regex_learning_rate.search(line)
            if learning_rate_match:
                learning_rate = float(learning_rate_match.group(1))


            data_diff_list,data_diff_row,para_diff_list,para_diff_row,L1L2_list,L1L2_row,top_list,top_row=get_datadiff_paradiff(
                    line,data_diff_row,para_diff_row,
                    data_diff_list,para_diff_list,
                    L1L2_list,L1L2_row,
                    top_list,top_row,
                    train_iter
                    )
            train_iter=-1

            train_dict_list, train_row = parse_line_for_net_output(
                regex_train_output, train_row, train_dict_list,
                line, iteration, seconds, learning_rate
            )
            test_dict_list, test_row = parse_line_for_net_output(
                regex_test_output, test_row, test_dict_list,
                line, iteration, seconds, learning_rate
            )

    fix_initial_nan_learning_rate(train_dict_list)
    fix_initial_nan_learning_rate(test_dict_list)

    return train_dict_list, test_dict_list,data_diff_list,para_diff_list,L1L2_list,top_list


def parse_line_for_net_output(regex_obj, row, row_dict_list,
                              line, iteration, seconds, learning_rate):
    """Parse a single line for training or test output

    Returns a tuple (row_dict_list, row)
    row: may be either a new row or an augmented version of the current row
    row_dict_list: may be either the current row_dict_list or an augmented
    version of the current row_dict_list
    """

    output_match = regex_obj.search(line)
    if output_match:
        if not row or row['NumIters'] != iteration:
            # Push the last row and start a new one
            if row:
                # If we're on a new iteration, push the last row
                # This will probably only happen for the first row; otherwise
                # the full row checking logic below will push and clear full
                # rows
                row_dict_list.append(row)

            row = OrderedDict([
                ('NumIters', iteration),
                ('Seconds', seconds),
                ('LearningRate', learning_rate)
            ])

        # output_num is not used; may be used in the future
        # output_num = output_match.group(1)
        output_name = output_match.group(2)
        output_val = output_match.group(3)
        row[output_name] = float(output_val)

    if row and len(row_dict_list) >= 1 and len(row) == len(row_dict_list[0]):
        # The row is full, based on the fact that it has the same number of
        # columns as the first row; append it to the list
        row_dict_list.append(row)
        row = None

    return row_dict_list, row


def fix_initial_nan_learning_rate(dict_list):
    """Correct initial value of learning rate

    Learning rate is normally not printed until after the initial test and
    training step, which means the initial testing and training rows have
    LearningRate = NaN. Fix this by copying over the LearningRate from the
    second row, if it exists.
    """

    if len(dict_list) > 1:
        dict_list[0]['LearningRate'] = dict_list[1]['LearningRate']


def save_csv_files(logfile_path, output_dir, train_dict_list, test_dict_list,data_diff_list, para_diff_list,L1L2_list,top_list,
                   delimiter=',', verbose=False):
    """Save CSV files to output_dir

    If the input log file is, e.g., caffe.INFO, the output names will be
    caffe.INFO.train, caffe.INFO.test, caffe.INFO.datadiff,
    caffe.INFO.paradiff, caffe.INFO.L1L2 and caffe.INFO.topdata
    """

    log_basename = os.path.basename(logfile_path)

    train_filename = os.path.join(output_dir, log_basename + '.train')
    write_csv(train_filename, train_dict_list, delimiter, verbose)

    test_filename = os.path.join(output_dir, log_basename + '.test')
    write_csv(test_filename, test_dict_list, delimiter, verbose)


    data_diff_filename=os.path.join(output_dir, log_basename + '.datadiff')
    write_csv(data_diff_filename, data_diff_list, delimiter, verbose)

    para_diff_filename=os.path.join(output_dir, log_basename + '.paradiff')
    write_csv(para_diff_filename, para_diff_list, delimiter, verbose)

    L1L2_filename=os.path.join(output_dir, log_basename + '.L1L2')
    write_csv(L1L2_filename, L1L2_list, delimiter, verbose)

    topdata_filename=os.path.join(output_dir, log_basename + '.topdata')
    write_csv(topdata_filename, top_list, delimiter, verbose)


def write_csv(output_filename, dict_list, delimiter, verbose=False):
    """Write a CSV file
    """

    if not dict_list:
        if verbose:
            print('Not writing %s; no lines to write' % output_filename)
        return

    dialect = csv.excel
    dialect.delimiter = delimiter

    with open(output_filename, 'w') as f:
        dict_writer = csv.DictWriter(f, fieldnames=dict_list[0].keys(),
                                     dialect=dialect)
        dict_writer.writeheader()
        dict_writer.writerows(dict_list)
    if verbose:
        print('Wrote %s' % output_filename)


def parse_args():
    description = ('Parse a Caffe training log into CSV files '
                   'containing training, testing and debug_info information')
    parser = argparse.ArgumentParser(description=description)

    parser.add_argument('logfile_path',
                        help='Path to log file')

    parser.add_argument('output_dir',
                        help='Directory in which to place output CSV files')

    parser.add_argument('--verbose',
                        action='store_true',
                        help='Print some extra info (e.g., output filenames)')

    parser.add_argument('--delimiter',
                        default=',',
                        help=('Column delimiter in output files '
                              '(default: \'%(default)s\')'))

    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    train_dict_list, test_dict_list,data_diff_list,para_diff_list,L1L2_list,top_list = parse_log(args.logfile_path)
    save_csv_files(args.logfile_path, args.output_dir, train_dict_list,
                   test_dict_list, data_diff_list, para_diff_list, L1L2_list,top_list,delimiter=args.delimiter)


if __name__ == '__main__':
    main()
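With debug_info enabled, the extra log lines take the shapes the script's regexes expect. A quick self-check with hand-written sample lines (the layer names and values are made up, not from a real run):

```python
import re

# Hand-written debug_info lines in the format the parser's regexes expect.
lines = [
    "[Forward] Layer conv1, top blob conv1 data: 0.0434",
    "[Backward] Layer conv1, bottom blob data diff: 0.00123",
    "[Backward] Layer conv1, param blob 0 diff: 0.00207",
    "All net params (data, diff): L1 norm = (123.45, 0.678); L2 norm = (1.23, 0.045)",
]

# Same patterns as in get_datadiff_paradiff above.
regex_data = re.compile(r'\[Backward\] Layer (\S+), bottom blob (\S+) diff: ([\.\deE+-]+)')
regex_para = re.compile(r'\[Backward\] Layer (\S+), param blob (\d+) diff: ([\.\deE+-]+)')
regex_top = re.compile(r'\[Forward\] Layer (\S+), (\S+) blob (\S+) data: ([\.\deE+-]+)')

for line in lines:
    for name, rgx in [('top', regex_top), ('data', regex_data), ('para', regex_para)]:
        m = rgx.search(line)
        if m:
            print(name, m.groups())
```

Each regex fires only on its own line kind, so the script can route every debug line into the right CSV table.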

(figure: trend plot of the parsed parameters over iterations)
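The exported .datadiff / .paradiff / .L1L2 / .topdata files are plain delimited tables, so a trend plot is just each column against NumIters. A self-contained sketch using an in-memory sample instead of a real caffe.INFO.datadiff file (column names here are illustrative):

```python
import csv
from io import StringIO

# Stand-in for a caffe.INFO.datadiff file written by save_csv_files.
sample = StringIO(
    "conv1-data,ip1-ip1,NumIters\n"
    "0.01,0.002,0\n"
    "0.03,0.004,100\n"
)

# Read the table back into per-column series.
rows = list(csv.DictReader(sample))
iters = [float(r['NumIters']) for r in rows]
conv1 = [float(r['conv1-data']) for r in rows]
print(iters, conv1)

# A real trend plot would then be, e.g.:
#   import matplotlib.pyplot as plt
#   plt.plot(iters, conv1, label='conv1-data'); plt.legend(); plt.show()
```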
