C/C++: pthread_cond_timedwait fails to block (times out and returns immediately)

This post looks at a case where pthread_cond_timedwait times out and returns immediately instead of blocking; switching the condition variable's clock source to CLOCK_MONOTONIC worked around the problem.


A few days ago, while deploying software to production, I noticed a process burning an enormous amount of CPU. Digging into it, the cause turned out to be pthread_cond_timedwait failing to block while handling messages, or rather, returning as timed out before the deadline was ever reached.

The sample code is as follows:

#include <iostream>
#include <pthread.h>
#include <sys/time.h>

using namespace std;

class Ebupt
{
public:
    Ebupt();
    virtual ~Ebupt();
    void dealMsg(long wait_ns);
private:
    pthread_mutex_t mutex;
    pthread_cond_t cond;
};

Ebupt::Ebupt()
{
    pthread_mutex_init(&mutex, NULL);
    pthread_cond_init(&cond, NULL);
}

Ebupt::~Ebupt()
{
    pthread_mutex_destroy(&mutex);
    pthread_cond_destroy(&cond);
}

void Ebupt::dealMsg(long wait_ns)
{
    pthread_mutex_lock(&mutex);

    // Build the absolute deadline: current wall-clock time from gettimeofday
    // plus the relative wait given in nanoseconds.
    struct timeval now;
    gettimeofday(&now, NULL);
    struct timespec abstime;

    if (now.tv_usec*1000 + (wait_ns%1000000000) >= 1000000000)
    {
        abstime.tv_sec = now.tv_sec + wait_ns/1000000000 + 1;
        abstime.tv_nsec = (now.tv_usec*1000 + wait_ns%1000000000)%1000000000;
    }
    else
    {
        abstime.tv_sec = now.tv_sec + wait_ns/1000000000;
        abstime.tv_nsec = now.tv_usec*1000 + wait_ns%1000000000;
    }

    pthread_cond_timedwait(&cond, &mutex, &abstime);
    pthread_mutex_unlock(&mutex);
}

int main()
{
    Ebupt e;
    struct timeval now;
    while (true)
    {
        gettimeofday(&now, NULL);
        cout<<"++"<<now.tv_sec<<":"<<now.tv_usec<<endl;
        e.dealMsg(200000000);
        gettimeofday(&now, NULL);
        cout<<"--"<<now.tv_sec<<":"<<now.tv_usec<<endl;
    }
    return 0;
}

Compile and run:

[ismp@cn3 20171026]$ g++ -o main main.C -lpthread
[ismp@cn3 20171026]$ ./main
++1509023506:721641
--1509023506:721706
++1509023506:721710
--1509023506:721716
++1509023506:721718
--1509023506:721724
++1509023506:721726
--1509023506:721731
++1509023506:721733
--1509023506:721739
++1509023506:721741
--1509023506:721750
++1509023506:721753
--1509023506:721761
++1509023506:721763
--1509023506:721769
……
(CTRL+C)

In theory, since nothing ever signals the condition variable, the call should block for 200 ms and then return with a timeout. In practice it did not block at all; it bolted like a runaway horse and returned immediately. And since dealMsg sits inside a while loop, the program effectively spins in a busy loop, so the high CPU usage is no surprise.
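As an aside, the demo never checks what pthread_cond_timedwait actually returns, so it cannot tell a genuine timeout (ETIMEDOUT) apart from a rejected deadline (EINVAL, e.g. a malformed abstime). A minimal diagnostic one could drop into dealMsg in place of the bare call; the logging is my own addition, not part of the original code, and needs <cstring> for strerror:

    // Hypothetical diagnostic: print the return code of pthread_cond_timedwait.
    // ETIMEDOUT means the wait (possibly prematurely) timed out; EINVAL usually
    // means abstime is malformed, e.g. tv_nsec outside [0, 999999999].
    int rc = pthread_cond_timedwait(&cond, &mutex, &abstime);
    if (rc != 0)
    {
        cout << "pthread_cond_timedwait returned " << rc
             << " (" << strerror(rc) << ")" << endl;
    }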

Let's take a look with top:

top - 21:15:52 up 419 days,  7:30,  2 users,  load average: 9.57, 8.94, 8.32
Tasks: 241 total,   3 running, 238 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.6%us, 63.1%sy,  0.0%ni, 24.6%id,  0.0%wa,  0.0%hi,  1.6%si,  0.0%st
Mem:  32879016k total, 32578784k used,   300232k free,   217448k buffers
Swap:  2097144k total,   749020k used,  1348124k free, 28921976k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                       
22096 ismp      20   0 13904 1116  956 S  3.7  0.0   0:01.84 main                                                                                                           
20409 ismp      20   0  109m 1956 1556 S  0.0  0.0   0:00.02 bash

It's a busy loop with no blocking whatsoever…

This toy example isn't too bad, with CPU climbing to just under 4%, but my real process shot up past 70%…

I chased quite a few leads afterwards. At one point I wondered: could it be that the clock gettimeofday reads and the clock pthread_cond_timedwait actually uses are not the same one?

So I changed it and tried again, as follows:

#include <iostream>
#include <time.h>
#include <pthread.h>
#include <sys/time.h>

using namespace std;

class Ebupt
{
public:
    Ebupt();
    virtual ~Ebupt();
    void dealMsg(long wait_ns);
private:
    pthread_mutex_t mutex;
    pthread_cond_t cond;
};

Ebupt::Ebupt()
{
    pthread_mutex_init(&mutex, NULL);
    pthread_cond_init(&cond, NULL);
}

Ebupt::~Ebupt()
{
    pthread_mutex_destroy(&mutex);
    pthread_cond_destroy(&cond);
}

void Ebupt::dealMsg(long wait_ns)
{
    pthread_mutex_lock(&mutex);

    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);
    struct timespec abstime;

    if (now.tv_nsec + (wait_ns%1000000000) >= 1000000000)
    {
        abstime.tv_sec = now.tv_sec + wait_ns/1000000000 + 1;
        abstime.tv_nsec = (now.tv_nsec + wait_ns%1000000000)%1000000000;
    }
    else
    {
        abstime.tv_sec = now.tv_sec + wait_ns/1000000000;
        abstime.tv_nsec = now.tv_nsec + wait_ns%1000000000;
    }

    pthread_cond_timedwait(&cond, &mutex, &abstime);
    pthread_mutex_unlock(&mutex);
}

int main()
{
    Ebupt e;
    struct timeval now;
    while (true)
    {
        gettimeofday(&now, NULL);
        cout<<"++"<<now.tv_sec<<":"<<now.tv_usec<<endl;
        e.dealMsg(200000000);
        gettimeofday(&now, NULL);
        cout<<"--"<<now.tv_sec<<":"<<now.tv_usec<<endl;
    }
    return 0;
}
[ismp@cn3 20171026]$ g++ -o main main.C -lpthread -lrt
[ismp@cn3 20171026]$ ./main
++1509024234:822675
--1509024234:822733
++1509024234:822737
--1509024234:822748
++1509024234:822751
--1509024234:822761
……
(CTRL+C)

Still no blocking, so the cause is not that gettimeofday and pthread_cond_timedwait use different clocks.

What if I give the condition variable an explicit attribute? Like this:

#include <iostream>
#include <time.h>
#include <pthread.h>
#include <sys/time.h>

using namespace std;

class Ebupt
{
……
Ebupt::Ebupt()
{
    pthread_mutex_init(&mutex, NULL);

    pthread_condattr_t condattr;
    pthread_condattr_init(&condattr);
    pthread_condattr_setclock(&condattr, CLOCK_REALTIME);
    pthread_cond_init(&cond, &condattr);
    pthread_condattr_destroy(&condattr);
}
…… (same as above)
[ismp@cn3 20171026]$ g++ -o main main.C -lpthread -lrt
[ismp@cn3 20171026]$ ./main
++1509024510:358162
--1509024510:358221
++1509024510:358225
--1509024510:358236
++1509024510:358239
--1509024510:358249
……
(CTRL+C)

Later, quite by accident, I found that the problem can be worked around by switching clocks and using CLOCK_MONOTONIC:

#include <iostream>
#include <time.h>
#include <pthread.h>
#include <sys/time.h>

using namespace std;

class Ebupt
{

……

Ebupt::Ebupt()
{
    pthread_mutex_init(&mutex, NULL);

    pthread_condattr_t condattr;
    pthread_condattr_init(&condattr);
    pthread_condattr_setclock(&condattr, CLOCK_MONOTONIC);
    pthread_cond_init(&cond, &condattr);
    pthread_condattr_destroy(&condattr);
}

……

void Ebupt::dealMsg(long wait_ns)
{
    pthread_mutex_lock(&mutex);

    struct timespec now;
    // Query the same clock (CLOCK_MONOTONIC) the condition variable was
    // configured with, so the deadline and the wait share one time base.
    clock_gettime(CLOCK_MONOTONIC, &now);
    struct timespec abstime;

    if (now.tv_nsec + (wait_ns%1000000000) >= 1000000000)
    {
        abstime.tv_sec = now.tv_sec + wait_ns/1000000000 + 1;
        abstime.tv_nsec = (now.tv_nsec + wait_ns%1000000000)%1000000000;
    }
    else
    {
        abstime.tv_sec = now.tv_sec + wait_ns/1000000000;
        abstime.tv_nsec = now.tv_nsec + wait_ns%1000000000;
    }

    pthread_cond_timedwait(&cond, &mutex, &abstime);
    pthread_mutex_unlock(&mutex);
}

……
[ismp@cn3 20171026]$ g++ -o main main.C -lpthread -lrt
[ismp@cn3 20171026]$ ./main
++1509024798:440277
--1509024798:640389
++1509024798:640400
--1509024798:840413
++1509024798:840424
--1509024799:40507
++1509024799:40517
--1509024799:240565
++1509024799:240581
--1509024799:440595
(CTRL+C)

In other words, the final fix is:

Set the condition variable's clock to CLOCK_MONOTONIC rather than CLOCK_REALTIME.

CLOCK_MONOTONIC is driven by the kernel's jiffies counter; it is monotonically increasing and represents the time since the machine booted (jiffies is initialized to 0 at boot).

CLOCK_REALTIME is based on xtime, which is read from the hardware clock (RTC) on the motherboard at boot, and at runtime it can also be changed by a privileged user (for example root) with a command like date. Say you set a timeout one hour in the future: if during that blocking window someone uses date to jump the system time (the wall time) forward past that point, the blocked call times out and returns immediately, exactly like our pthread_cond_timedwait.
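Putting the workaround together, here is a minimal self-contained sketch of the pattern (my own consolidation of the code above, with my own names such as wait_ms; it assumes a POSIX system whose pthread_condattr_setclock accepts CLOCK_MONOTONIC): create the condition variable bound to CLOCK_MONOTONIC, build the absolute deadline from clock_gettime(CLOCK_MONOTONIC), then wait.

#include <pthread.h>
#include <time.h>

static pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_cond;

void init_cond()
{
    pthread_condattr_t attr;
    pthread_condattr_init(&attr);
    // Bind the condvar's timeout to the monotonic clock, so stepping the
    // wall clock (date, NTP jumps) cannot make the wait expire early.
    pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
    pthread_cond_init(&g_cond, &attr);
    pthread_condattr_destroy(&attr);
}

int wait_ms(long ms)
{
    struct timespec abstime;
    clock_gettime(CLOCK_MONOTONIC, &abstime);   // same clock as the condvar
    abstime.tv_sec  += ms / 1000;
    abstime.tv_nsec += (ms % 1000) * 1000000L;
    if (abstime.tv_nsec >= 1000000000L)         // keep tv_nsec in [0, 1e9)
    {
        abstime.tv_sec  += 1;
        abstime.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock(&g_mutex);
    int rc = pthread_cond_timedwait(&g_cond, &g_mutex, &abstime);  // ETIMEDOUT on timeout
    pthread_mutex_unlock(&g_mutex);
    return rc;
}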

In the end I never did figure out what actually caused pthread_cond_timedwait to fail to block; this is just a workaround I stumbled upon. I'll dig into the real cause when I find the time…

Postscript:

I noticed that the CPU usage of processes on the production machine all looked a bit abnormal:

top - 21:45:30 up 419 days,  8:00,  1 user,  load average: 8.85, 8.37, 8.38
Tasks: 238 total,   4 running, 234 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.4%us, 65.4%sy,  0.0%ni, 23.5%id,  0.0%wa,  0.0%hi,  0.7%si,  0.0%st
Mem:  32879016k total, 32650184k used,   228832k free,   218716k buffers
Swap:  2097144k total,   749020k used,  1348124k free, 28992212k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                       
12303 sdc       20   0 3517m 864m 7176 S 344.1  2.7  5125261h java                                                                                                          
    9 root      20   0     0    0    0 S 50.5  0.0 144940:15 ksoftirqd/1                                                                                                    
   13 root      20   0     0    0    0 R 48.9  0.0 157182:21 ksoftirqd/2                                                                                                    
    4 root      20   0     0    0    0 R 46.9  0.0 153791:36 ksoftirqd/0                                                                                                    
   33 root      20   0     0    0    0 S 46.5  0.0 148379:24 ksoftirqd/7                                                                                                    
   21 root      20   0     0    0    0 R 44.2  0.0 156277:16 ksoftirqd/4                                                                                                    
   29 root      20   0     0    0    0 S 43.2  0.0 154775:19 ksoftirqd/6                                                                                                    
   17 root      20   0     0    0    0 S 30.9  0.0 174973:53 ksoftirqd/3                                                                                                    
   25 root      20   0     0    0    0 S 10.0  0.0 156328:27 ksoftirqd/5                                                                                                    
27888 www       20   0  177m 121m 1900 S  1.3  0.4   1167:11 nginx                                                                                                          
   41 root      20   0     0    0    0 S  0.3  0.0  17:36.77 events/6                                                                                                       
21937 sdc       20   0  134m 7564 1136 S  0.3  0.0  57:06.09 redis-server                                                                                                   
24218 ismp      20   0 15164 1344  944 R  0.3  0.0   0:00.01 top                                                                                                            
27890 www       20   0  180m 124m 1900 S  0.3  0.4   1163:55 nginx                                                                                                          
27891 www       20   0  170m 114m 1912 S  0.3  0.4   1069:01 nginx                                                                                                          
    1 root      20   0 19348  852  544 S  0.0  0.0   0:01.41 init                                                                                                           
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd                                                                                                       
    3 root      RT   0     0    0    0 S  0.0  0.0   1:02.86 migration/0                                                                                                    
    5 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0     

Especially the java background process and ksoftirqd.

I wonder whether the java process also uses condition variables under the hood and likewise failed to block?

Later still… after rebooting the production machine, the CPU usage of the various processes dropped back down, and the blocking failure described above never showed up again……

If anyone has run into this problem before, I'd love to hear from you~

Thread 1 "buffer_pool_man" received signal SIGTSTP, Stopped (user). futex_wait (private=0, expected=2, futex_word=0x5555556c0a90) at ../sysdeps/nptl/futex-internal.h:146 warning: 146 ../sysdeps/nptl/futex-internal.h: No such file or directory (gdb) thread apply all bt Thread 13 (Thread 0x7ffff6ffe6c0 (LWP 92375) "buffer_pool_man"): #0 __futex_abstimed_wait_common (cancel=false, private=<optimized out>, abstime=0x0, clockid=0, expected=3, futex_word=0x5555556cb5a4) at ./nptl/futex-internal.c:103 #1 __GI___futex_abstimed_wait64 (futex_word=futex_word@entry=0x5555556cb5a4, expected=expected@entry=3, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=<optimized out>) at ./nptl/futex-internal.c:128 #2 0x00007ffff78a333b in __pthread_rwlock_wrlock_full64 (abstime=0x0, clockid=0, rwlock=0x5555556cb598) at ./nptl/pthread_rwlock_common.c:730 #3 ___pthread_rwlock_wrlock (rwlock=0x5555556cb598) at ./nptl/pthread_rwlock_wrlock.c:26 #4 0x0000555555593573 in std::__glibcxx_rwlock_wrlock (__rwlock=0x5555556cb598) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/shared_mutex:83 --Type <RET> for more, q to quit, c to continue without paging--c #5 0x00005555555954d5 in std::__shared_mutex_pthread::lock (this=0x5555556cb598) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/shared_mutex:196 #6 0x0000555555594575 in std::shared_mutex::lock (this=0x5555556cb598) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/shared_mutex:423 #7 0x0000555555590f1c in bustub::BufferPoolManager::CheckedWritePage (this=0x5555556c5a20, page_id=1, access_type=bustub::AccessType::Unknown) at /home/relic/cmu155452024/CMU15443-2024/src/buffer/buffer_pool_manager.cpp:222 #8 0x00005555555925b5 in bustub::BufferPoolManager::WritePage (this=0x5555556c5a20, page_id=1, access_type=bustub::AccessType::Unknown) at /home/relic/cmu155452024/CMU15443-2024/src/buffer/buffer_pool_manager.cpp:362 #9 0x000055555557043b in bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody()::$_5::operator()() const (this=0x5555556c5f98) at /home/relic/cmu155452024/CMU15443-2024/test/buffer/buffer_pool_manager_test.cpp:335 #10 0x00005555555703dd in std::__invoke_impl<void, bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody()::$_5>(std::__invoke_other, bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody()::$_5&&) (__f=...) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/invoke.h:61 #11 0x000055555557036d in std::__invoke<bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody()::$_5>(bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody()::$_5&&) (__fn=...) 
at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/invoke.h:96 #12 0x0000555555570345 in std::thread::_Invoker<std::tuple<bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody()::$_5> >::_M_invoke<0ul>(std::_Index_tuple<0ul>) (this=0x5555556c5f98) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/std_thread.h:292 #13 0x0000555555570315 in std::thread::_Invoker<std::tuple<bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody()::$_5> >::operator()() (this=0x5555556c5f98) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/std_thread.h:299 #14 0x0000555555570229 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody()::$_5> > >::_M_run() (this=0x5555556c5f90) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/std_thread.h:244 #15 0x00007ffff7cecdb4 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6 #16 0x00007ffff789caa4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:447 #17 0x00007ffff7929c6c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 Thread 12 (Thread 0x7ffff77ff6c0 (LWP 92374) "buffer_pool_man"): #0 0x00007ffff7898d71 in __futex_abstimed_wait_common64 (private=-1, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x5555556c5d48) at ./nptl/futex-internal.c:57 #1 __futex_abstimed_wait_common (cancel=true, private=-1, abstime=0x0, clockid=0, expected=0, futex_word=0x5555556c5d48) at ./nptl/futex-internal.c:87 #2 __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556c5d48, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139 #3 0x00007ffff789b7ed in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556c5cf8, cond=0x5555556c5d20) at ./nptl/pthread_cond_wait.c:503 #4 ___pthread_cond_wait (cond=0x5555556c5d20, mutex=0x5555556c5cf8) at ./nptl/pthread_cond_wait.c:627 #5 0x00005555555ae691 in std::condition_variable::wait<bustub::Channel<std::optional<bustub::DiskRequest> >::Get()::{lambda()#1}>(std::unique_lock<std::mutex>&, bustub::Channel<std::optional<bustub::DiskRequest> >::Get()::{lambda()#1}) (this=0x5555556c5d20, __lock=..., __p=...) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/condition_variable:105 #6 0x00005555555ac48c in bustub::Channel<std::optional<bustub::DiskRequest> >::Get (this=0x5555556c5cf8) at /home/relic/cmu155452024/CMU15443-2024/src/include/common/channel.h:48 #7 0x00005555555aba6e in bustub::DiskScheduler::StartWorkerThread (this=0x5555556c5cf0) at /home/relic/cmu155452024/CMU15443-2024/src/storage/disk/disk_scheduler.cpp:41 #8 0x00005555555ac0c8 in bustub::DiskScheduler::DiskScheduler(bustub::DiskManager*)::$_0::operator()() const (this=0x7fffe4000b78) at /home/relic/cmu155452024/CMU15443-2024/src/storage/disk/disk_scheduler.cpp:22 #9 0x00005555555ac09d in std::__invoke_impl<void, bustub::DiskScheduler::DiskScheduler(bustub::DiskManager*)::$_0>(std::__invoke_other, bustub::DiskScheduler::DiskScheduler(bustub::DiskManager*)::$_0&&) (__f=...) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/invoke.h:61 #10 0x00005555555ac02d in std::__invoke<bustub::DiskScheduler::DiskScheduler(bustub::DiskManager*)::$_0>(bustub::DiskScheduler::DiskScheduler(bustub::DiskManager*)::$_0&&) (__fn=...) 
at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/invoke.h:96 #11 0x00005555555ac005 in std::thread::_Invoker<std::tuple<bustub::DiskScheduler::DiskScheduler(bustub::DiskManager*)::$_0> >::_M_invoke<0ul>(std::_Index_tuple<0ul>) (this=0x7fffe4000b78) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/std_thread.h:292 #12 0x00005555555abfd5 in std::thread::_Invoker<std::tuple<bustub::DiskScheduler::DiskScheduler(bustub::DiskManager*)::$_0> >::operator()() (this=0x7fffe4000b78) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/std_thread.h:299 #13 0x00005555555abef9 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<bustub::DiskScheduler::DiskScheduler(bustub::DiskManager*)::$_0> > >::_M_run() (this=0x7fffe4000b70) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/std_thread.h:244 #14 0x00007ffff7cecdb4 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6 #15 0x00007ffff789caa4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:447 #16 0x00007ffff7929c6c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 Thread 1 (Thread 0x7ffff7ebf740 (LWP 92142) "buffer_pool_man"): #0 futex_wait (private=0, expected=2, futex_word=0x5555556c0a90) at ../sysdeps/nptl/futex-internal.h:146 #1 __GI___lll_lock_wait (futex=futex@entry=0x5555556c0a90, private=0) at ./nptl/lowlevellock.c:49 #2 0x00007ffff78a0101 in lll_mutex_lock_optimized (mutex=0x5555556c0a90) at ./nptl/pthread_mutex_lock.c:48 #3 ___pthread_mutex_lock (mutex=0x5555556c0a90) at ./nptl/pthread_mutex_lock.c:93 #4 0x000055555556eb63 in __gthread_mutex_lock (__mutex=0x5555556c0a90) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/x86_64-linux-gnu/c++/13/bits/gthr-default.h:749 #5 0x0000555555573da5 in std::mutex::lock (this=0x5555556c0a90) at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/std_mutex.h:113 #6 0x00005555555939f3 in std::scoped_lock<std::mutex>::scoped_lock (this=0x7fffffffcf60, __m=...) 
at /usr/bin/../lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/mutex:779 #7 0x0000555555590d1a in bustub::BufferPoolManager::CheckedWritePage (this=0x5555556c5a20, page_id=2, access_type=bustub::AccessType::Unknown) at /home/relic/cmu155452024/CMU15443-2024/src/buffer/buffer_pool_manager.cpp:212 #8 0x00005555555925b5 in bustub::BufferPoolManager::WritePage (this=0x5555556c5a20, page_id=2, access_type=bustub::AccessType::Unknown) at /home/relic/cmu155452024/CMU15443-2024/src/buffer/buffer_pool_manager.cpp:362 #9 0x000055555556e4a2 in bustub::BufferPoolManagerTest_DeadlockTest_Test::TestBody (this=0x7fffec000ed0) at /home/relic/cmu155452024/CMU15443-2024/test/buffer/buffer_pool_manager_test.cpp:350 #10 0x000055555560703b in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void> (object=0x7fffec000ed0, method=&virtual testing::Test::TestBody(), location=0x55555564cb63 "the test body") at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:2612 #11 0x00005555555ea90a in testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void> (object=0x7fffec000ed0, method=&virtual testing::Test::TestBody(), location=0x55555564cb63 "the test body") at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:2648 #12 0x00005555555c7243 in testing::Test::Run (this=0x7fffec000ed0) at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:2687 #13 0x00005555555c7ef0 in testing::TestInfo::Run (this=0x5555556c0350) at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:2836 #14 0x00005555555c87dc in testing::TestSuite::Run (this=0x5555556bf9d0) at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:3015 #15 0x00005555555db3e4 in testing::internal::UnitTestImpl::RunAllTests (this=0x5555556bf5b0) at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:5920 #16 0x000055555560b5bb in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (object=0x5555556bf5b0, method=(bool (testing::internal::UnitTestImpl::*)(class testing::internal::UnitTestImpl * const)) 0x5555555daf80 <testing::internal::UnitTestImpl::RunAllTests()>, location=0x55555564d408 "auxiliary test code (environments or event listeners)") at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:2612 #17 0x00005555555ed05a in testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (object=0x5555556bf5b0, method=(bool (testing::internal::UnitTestImpl::*)(class testing::internal::UnitTestImpl * const)) 0x5555555daf80 <testing::internal::UnitTestImpl::RunAllTests()>, location=0x55555564d408 "auxiliary test code (environments or event listeners)") at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:2648 #18 0x00005555555daf2b in testing::UnitTest::Run (this=0x5555556ace08 <testing::UnitTest::GetInstance()::instance>) at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/src/gtest.cc:5484 #19 0x000055555560ecc1 in RUN_ALL_TESTS () at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googletest/include/gtest/gtest.h:2317 #20 0x000055555560ec9b in main (argc=1, argv=0x7fffffffdaf8) at /home/relic/cmu155452024/CMU15443-2024/third_party/googletest/googlemock/src/gmock_main.cc:71
09-29
评论 6
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值