Chapter 2 Managing threads

These notes dig into the basics of thread management in the C++ Standard Library: launching threads, waiting for them to finish, and running them in the background. They also cover passing arguments to a thread at launch, transferring ownership of threads, and choosing the number of threads at runtime.



2.1 Basic thread management

2.1.1 Launching a thread

void do_some_work();
std::thread my_thread(do_some_work);

class background_task
{
public:
    void operator()() const
    {
        do_something();
        do_something_else();
    }
};
background_task f;
std::thread my_thread(f);

std::thread my_thread(background_task());

The declaration above is ambiguous because of the temporary background_task(). When a temporary like this is used in place of a named object, C++'s "most vexing parse" kicks in: the compiler treats the statement as the declaration of a function named my_thread that takes a pointer to a function (taking no arguments and returning a background_task) and returns a std::thread, rather than the definition of a variable. Two ways to resolve it:

std::thread my_thread((background_task()));   // extra parentheses force a variable definition
std::thread my_thread{background_task()};     // uniform initialization cannot be parsed as a declaration

Note that you only have to make this decision before the  std::thread object is destroyed—the thread itself may well have finished long before you join with it or detach it, and if you detach it, then the thread may continue running long after the  std::thread object is destroyed.

If you don’t wait for your thread to finish, then you need to ensure that the data accessed  by  the  thread  is  valid  until  the  thread  has  finished  with  it. 

struct func
{
    int& i;
    func(int& i_):i(i_){}
    void operator()()
    {
        for(unsigned j=0;j<1000000;++j)
        {
            do_something(i);               // potential access to a dangling reference
        }
    }
};
void oops()
{
    int some_local_state=0;
    func my_func(some_local_state);
    std::thread my_thread(my_func);
    my_thread.detach();                    // don't wait for the thread to finish
}                                          // some_local_state is destroyed here, but the thread may still be running

The code above is broken: the member i refers to some_local_state, and my_thread is detached right after creation, so once oops returns, i becomes a dangling reference while the detached thread may still be using it.
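
One common way to avoid this, in line with the text above, is to make the thread function self-contained: copy the data into the callable instead of holding a reference. A minimal sketch of that fix (func_by_value and no_oops are my names, not the book's):

struct func_by_value
{
    int i;                                 // owns its own copy of the state
    explicit func_by_value(int i_):i(i_){}
    void operator()()
    {
        for(unsigned j=0;j<1000000;++j)
        {
            do_something(i);               // safe: i lives inside the callable
        }
    }
};
void no_oops()
{
    int some_local_state=0;
    func_by_value my_func(some_local_state);
    std::thread my_thread(my_func);
    my_thread.detach();                    // fine: the thread owns its data
}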

2.1.2 Waiting for a thread to complete

If you need to wait for a thread to complete, you can do this by calling join() on the associated std::thread instance. join() is simple and brute force—either you wait for a thread to finish or you don't.
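
As a minimal sketch of the semantics (my example, not the book's): join() blocks until the thread finishes, and afterwards the std::thread object no longer has an associated thread.

void wait_example()
{
    int local=0;
    func my_func(local);      // func as defined in the earlier listing
    std::thread t(my_func);
    t.join();                 // blocks until the new thread finishes; safe to use local afterwards
    // after join() returns, t.joinable() is false and t can be destroyed safely
}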

2.1.3 Waiting in exceptional circumstances

struct func;                     // as defined above
void f()
{
    int some_local_state=0;
    func my_func(some_local_state);
    std::thread t(my_func);
    try
    {
        do_something_in_current_thread();
    }
    catch(...)
    {
        t.join();       // make sure the thread is joined even when an exception propagates
        throw;
    }
    t.join();           // normal path: wait for the thread to finish
}

The code above does handle the exception case, but the try/catch is verbose and easy to get wrong. One way of doing this more cleanly is to use the standard Resource Acquisition Is Initialization (RAII) idiom and provide a class that does the join() in its destructor:

class thread_guard
{
    std::thread& t;
public:
    explicit thread_guard(std::thread& t_):
        t(t_)
    {}
    ~thread_guard()
    {
        if(t.joinable())         // join() may only be called on a thread that has one
        {
            t.join();            // wait in the destructor, on both normal and exceptional exit
        }
    }
    thread_guard(thread_guard const&)=delete;             // copying a guard would be dangerous, so forbid it
    thread_guard& operator=(thread_guard const&)=delete;
};
struct func;                          
void f()
{
    int some_local_state=0;
    func my_func(some_local_state);
    std::thread t(my_func);
    thread_guard g(t);
    do_something_in_current_thread();
} 

2.1.4 Running threads in the background

Calling detach()   on  a  std::thread  object  leaves  the  thread  to  run  in  the  background, with no direct means of communicating with it. It’s no longer possible to wait for that thread to complete; if a thread becomes detached, it isn’t possible to obtain a std::thread object that references it, so it can no longer be joined. Detached threads truly run in the background; ownership and control are passed over to the C++ Runtime Library, which ensures that the resources associated with the thread are correctly reclaimed when the thread exits.
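
Since detach() may only be called on a std::thread that actually has an associated thread, a common pattern is to guard it with joinable(). A minimal sketch (do_background_work is a placeholder of mine):

void do_background_work();    // hypothetical placeholder
void detach_example()
{
    std::thread t(do_background_work);
    if(t.joinable())
        t.detach();           // hand ownership over to the C++ runtime
    // t.joinable() is now false; the thread runs on in the background
}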

void edit_document(std::string const& filename)
{
    open_document_and_display_gui(filename);
    while(!done_editing())
    {
        user_command cmd=get_user_input();
        if(cmd.type==open_new_document)
        {
            std::string const new_name=get_filename_from_user();
            std::thread t(edit_document,new_name);                
            t.detach();                              
        }
        else
        {
            process_user_input(cmd);
        }
    }
}

The code above sketches a word-processor-like application: it loops handling user commands, and when the user asks to open a new document, it launches a new thread to edit that document and immediately detaches it, since each editing window runs independently.


2.2 Passing arguments to a thread function

void f(int i,std::string const& s);
std::thread t(f,3,"hello");

This is the basic way of passing arguments: they are copied into the thread's internal storage. Note that "hello" here is passed as a char const* and converted to a std::string only in the context of the new thread.
void f(int i,std::string const& s);
void oops(int some_param)
{
    char buffer[1024];                 // a local array
    sprintf(buffer,"%i",some_param);
    std::thread t(f,3,buffer);         // only the char* pointer is copied into the thread
    t.detach();
}

The code above is buggy: what gets copied into the new thread is the pointer buffer, and the conversion to std::string happens later, in the context of the new thread. If oops returns before that conversion occurs, buffer has already been destroyed, the pointer dangles, and the behavior is undefined.

void f(int i,std::string const& s);
void not_oops(int some_param)
{
    char buffer[1024];
    sprintf(buffer,"%i",some_param);
    std::thread t(f,3,std::string(buffer));    // convert to std::string before oops can exit
    t.detach();
}

Converting to std::string up front, as above, fixes the problem, though personally I feel the cost of the extra copy is still fairly high.

void update_data_for_widget(widget_id w,widget_data& data);    // expects a non-const reference
void oops_again(widget_id w)
{
    widget_data data;
    std::thread t(update_data_for_widget,w,data);       // data is copied into the thread's internal storage
    display_status();
    t.join();
    process_widget_data(data);         // sees data unchanged
}

The code above passes data by value: std::thread blindly copies the arguments into internal storage, so update_data_for_widget would be bound to that internal copy rather than to data itself, and process_widget_data would see data unchanged (a conforming compiler may even reject this outright, because the internal copy cannot bind to a non-const lvalue reference). When you really do want to pass by reference, wrap the argument in std::ref:

void update_data_for_widget(widget_id w,widget_data& data);    
void oops_again(widget_id w)
{
    widget_data data;
    std::thread t(update_data_for_widget,w,std::ref(data));    // pass a reference, not a copy
    display_status();
    t.join();
    process_widget_data(data);         
}

Note the use of std::ref(data): update_data_for_widget now receives a reference to data itself, so the updates are visible in process_widget_data.

An  example  of  such  a  type  is std::unique_ptr ,  which  provides  automatic  memory  management  for  dynamically allocated objects. Only one std::unique_ptr  instance can point to a given object at a time, and when that instance is destroyed, the pointed-to object is deleted. The move constructor  and  move  assignment  operator  allow  the  ownership  of  an  object  to  be  transferred around between std::unique_ptr  instances.

void process_big_object(std::unique_ptr<big_object>);
std::unique_ptr<big_object> p(new big_object);
p->prepare_data(42);
std::thread t(process_big_object,std::move(p));   // ownership of the big_object moves into the new thread

2.3 Transferring ownership of a thread

void some_function();
void some_other_function();
std::thread t1(some_function);          // t1 owns a running thread
std::thread t2=std::move(t1);           // ownership moves to t2
t1=std::thread(some_other_function);    // a temporary: ownership moves into t1 implicitly
std::thread t3;                         // t3 starts with no associated thread
t3=std::move(t2);                       // ownership of some_function's thread moves to t3
t1=std::move(t3);                       // t1 still owns a thread: std::terminate() is called

In the snippet above, std::move(t1) transfers ownership of the thread running some_function to t2; t1 is then given a new thread running some_other_function (moved in from a temporary, so no explicit std::move is needed). Next, ownership moves from t2 to t3. The final line tries to move t3's thread into t1, but t1 still owns the thread running some_other_function, and assigning to a std::thread that already has an associated thread calls std::terminate().

std::thread f()
{
    void some_function();
    return std::thread(some_function);
}
std::thread g()
{
    void some_other_function(int);
    std::thread t(some_other_function,42);
    return t;
}

As shown above, a std::thread can be returned from a function; returning or assigning it moves the thread, transferring ownership out to the caller.

class scoped_thread
{
    std::thread t;
public:
    explicit scoped_thread(std::thread t_):        
        t(std::move(t_))
    {
        if(!t.joinable())                          
            throw std::logic_error("No thread");
    }
    ~scoped_thread()
    {
        t.join();       
    }
    scoped_thread(scoped_thread const&)=delete;
    scoped_thread& operator=(scoped_thread const&)=delete;
};
struct func;                  
void f()
{
    int some_local_state;
    scoped_thread t(std::thread(func(some_local_state)));   // the temporary std::thread is moved into the guard
    do_something_in_current_thread();
}       

The code above moves ownership of the newly constructed thread straight into the scoped_thread object, which joins it in its destructor; unlike thread_guard, nothing outside can join or detach the thread behind its back.

void do_work(unsigned id);
void f()
{
    std::vector<std::thread> threads;
    for(unsigned i=0;i<20;++i)
    {
        threads.push_back(std::thread(do_work,i));   // spawn the threads
    }
    std::for_each(threads.begin(),threads.end(),
                  std::mem_fn(&std::thread::join));  // wait for each thread to finish
}

The code above spawns a batch of threads and then waits for them all, managing them as a group.

2.4 Choosing the number of threads at runtime

One feature of the C++ Standard Library that helps here is  std::thread::hardware_concurrency() . This function returns an indication of the number of threads that can truly run concurrently for a given execution of a program. 
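
Note that the value is only a hint: the function may return 0 when the information is not available. A minimal guard, mirroring what the listing below does:

unsigned const hw=std::thread::hardware_concurrency();   // may be 0 if the value can't be determined
unsigned const workers=hw!=0?hw:2;                       // fall back to a small default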

template<typename Iterator,typename T>
struct accumulate_block
{
    void operator()(Iterator first,Iterator last,T& result)
    {
        result=std::accumulate(first,last,result);
    }
};
template<typename Iterator,typename T>
T parallel_accumulate(Iterator first,Iterator last,T init)
{
    unsigned long const length=std::distance(first,last);
    if(!length)                                 // empty range: nothing to do
        return init;
    unsigned long const min_per_thread=25;
    unsigned long const max_threads=
        (length+min_per_thread-1)/min_per_thread;    // ceiling division: cap thread count by the workload
    unsigned long const hardware_threads=
        std::thread::hardware_concurrency();
    unsigned long const num_threads=                 // hardware_concurrency() may return 0
        std::min(hardware_threads!=0?hardware_threads:2,max_threads);
    unsigned long const block_size=length/num_threads;
    std::vector<T> results(num_threads);
    std::vector<std::thread> threads(num_threads-1);  // the current thread handles one block itself
    Iterator block_start=first;
    for(unsigned long i=0;i<(num_threads-1);++i)
    {
        Iterator block_end=block_start;
        std::advance(block_end,block_size);           // block_end marks the end of this block
        threads[i]=std::thread(                       // launch a worker for this block
            accumulate_block<Iterator,T>(),
            block_start,block_end,std::ref(results[i]));
        block_start=block_end;                        // move on to the next block
    }
    accumulate_block<Iterator,T>()(
        block_start,last,results[num_threads-1]);     // process the final block in this thread
    std::for_each(threads.begin(),threads.end(),
        std::mem_fn(&std::thread::join));             // wait for all the workers
    return std::accumulate(results.begin(),results.end(),init);   // combine the partial results
}

The function above computes a parallel sum: it picks a suitable number of threads from the size of the input and the hardware concurrency, splits the range into blocks, accumulates each block on its own thread, and then combines the partial results.
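
A minimal usage sketch (my example, assuming a vector of int; std::iota is from <numeric>):

std::vector<int> v(1000);
std::iota(v.begin(),v.end(),1);                            // fill with 1,2,...,1000
int const sum=parallel_accumulate(v.begin(),v.end(),0);    // 500500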

2.5 Identifying threads

Thread identifiers are of type std::thread::id and can be retrieved in two ways. First, the identifier for a thread can be obtained from its associated std::thread object by calling the get_id() member function. If the std::thread object doesn't have an associated thread of execution, the call to get_id() returns a default-constructed std::thread::id object, which indicates "not any thread." Alternatively, the identifier for the current thread can be obtained by calling std::this_thread::get_id(), which is also defined in the <thread> header.
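
A minimal sketch showing both retrieval paths (my example, not from the book):

void identify_threads()
{
    std::thread t(some_function);                              // some_function as declared earlier
    std::thread::id const worker_id=t.get_id();                // via the std::thread object
    std::thread::id const main_id=std::this_thread::get_id();  // id of the calling thread
    // ids can be compared with == and !=, ordered, hashed, or streamed with operator<<
    t.join();
}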

2.6 Summary

In  this  chapter  I  covered  the  basics  of  thread  management  with  the  C++  Standard Library: starting threads, waiting for them to finish, and not waiting for them to finish because you want them to run in the background. You also saw how to pass arguments into the thread function when a thread is started, how to transfer the responsibility for managing a thread from one part of the code to another, and how groups of threads can be used to divide work. Finally, I discussed identifying threads in order to associate  data  or  behavior  with  specific  threads  that’s  inconvenient  to  associate  through alternative means. Although you can do quite a lot with purely independent threads that each operate on separate data, as in listing 2.8 for example, sometimes it’s desirable to share data among threads while they’re running. Chapter 3 discusses the issues surrounding  sharing  data  directly  among  threads,  while  chapter  4  covers  more  general issues surrounding synchronizing operations with and without shared data.
