Chapter 3. Process Management

This chapter focuses on process management. It introduces the process descriptor and the task structure, including how they are allocated and stored. It also covers the various process states, such as TASK_RUNNING and TASK_INTERRUPTIBLE, the characteristics of each state and the conditions under which transitions occur, along with process context and the process family tree.

The process is one of the fundamental abstractions in Unix operating systems [1]. A process is a program (object code stored on some media) in execution. Processes are, however, more than just the executing program code (often called the text section in Unix). They also include a set of resources such as open files and pending signals, internal kernel data, processor state, an address space, one or more threads of execution, and a data section containing global variables. Processes, in effect, are the living result of running program code.
[1] The other fundamental abstraction is files.
Threads of execution, often shortened to threads, are the objects of activity within the process. Each thread includes a unique program counter, process stack, and set of processor registers. The kernel schedules individual threads, not processes. In traditional Unix systems, each process consists of one thread. In modern systems, however, multithreaded programs, those that consist of more than one thread, are common. As you will see later, Linux has a unique implementation of threads: It does not differentiate between threads and processes. To Linux, a thread is just a special kind of process.
On modern operating systems, processes provide two virtualizations: a virtualized processor and virtual memory. The virtual processor gives the process the illusion that it alone monopolizes the system, despite possibly sharing the processor among dozens of other processes. Chapter 4, "Process Scheduling," discusses this virtualization. Virtual memory lets the process allocate and manage memory as if it alone owned all the memory in the system. Virtual memory is covered in Chapter 11, "Memory Management." Interestingly, note that threads share the virtual memory abstraction while each receives its own virtualized processor.
A program itself is not a process; a process is an active program and related resources. Indeed, two or more processes can exist that are executing the same program. In fact, two or more processes can exist that share various resources, such as open files or an address space.
A process begins its life when, not surprisingly, it is created. In Linux, this occurs by means of the fork() system call, which creates a new process by duplicating an existing one. The process that calls fork() is the parent, whereas the new process is the child. The parent resumes execution and the child starts execution at the same place, where the call returns. The fork() system call returns from the kernel twice: once in the parent process and again in the newborn child.
Often, immediately after a fork it is desirable to execute a new, different program. The exec*() family of function calls creates a new address space and loads a new program into it. In modern Linux kernels, fork() is actually implemented via the clone() system call, which is discussed in a following section.
Finally, a program exits via the exit() system call. This function terminates the process and frees all its resources. A parent process can inquire about the status of a terminated child via the wait4() [2] system call, which enables a process to wait for the termination of a specific process. When a process exits, it is placed into a special zombie state that is used to represent terminated processes until the parent calls wait() or waitpid().
[2] The kernel implements the wait4() system call. Linux systems, via the C library, typically provide the wait(), waitpid(), wait3(), and wait4() functions. All these functions return status about a terminated process, albeit with slightly different semantics.
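To make this life cycle concrete, here is a minimal user-space sketch (not kernel code) that forks, execs a program in the child, and reaps the child in the parent; the choice of ls and the error handling are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
        int status;
        pid_t pid;

        pid = fork();
        if (pid < 0) {
                /* fork failed; no child was created */
                perror("fork");
                exit(EXIT_FAILURE);
        } else if (pid == 0) {
                /* child: fork() returned zero here */
                execlp("ls", "ls", "-l", (char *) NULL);
                perror("execlp");       /* reached only if exec fails */
                exit(EXIT_FAILURE);
        }

        /* parent: fork() returned the child's PID; reap the child so
           it does not linger in the zombie state */
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n",
               (int) pid, WEXITSTATUS(status));
        return 0;
}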
Another name for a process is a task. The Linux kernel internally refers to processes as tasks. In this book, I will use the terms interchangeably, although when I say task I am generally referring to a process from the kernel's point of view.

Process Descriptor and the Task Structure

The kernel stores the list of processes in a circular doubly linked list called the task list [3]. Each element in the task list is a process descriptor of the type struct task_struct, which is defined in <linux/sched.h>. The process descriptor contains all the information about a specific process.
[3] Some texts on operating system design call this list the task array. Because the Linux implementation is a linked list and not a static array, it is called the task list.
The task_struct is a relatively large data structure, at around 1.7 kilobytes on a 32-bit machine. This size, however, is quite small considering that the structure contains all the information that the kernel has and needs about a process. The process descriptor contains the data that describes the executing program: open files, the process's address space, pending signals, the process's state, and much more (see Figure 3.1).

Figure 3.1. The process descriptor and task list.

Allocating the Process Descriptor
The task_struct structure is allocated via the slab allocator to provide object reuse and cache coloring (see Chapter 11, "Memory Management"). Prior to the 2.6 kernel series, struct task_struct was stored at the end of the kernel stack of each process. This allowed architectures with few registers, such as x86, to calculate the location of the process descriptor via the stack pointer without using an extra register to store the location. With the process descriptor now dynamically created via the slab allocator, a new structure, struct thread_info, was created that again lives at the bottom of the stack (for stacks that grow down) and at the top of the stack (for stacks that grow up) [4]. See Figure 3.2. The new structure also makes it rather easy to calculate offsets of its values for use in assembly code.
[4] Register-impaired architectures were not the only reason for creating struct thread_info.

Figure 3.2. The process descriptor and kernel stack.

The thread_info structure is defined on x86 in <asm/thread_info.h> as
struct thread_info {
        struct task_struct    *task;
        struct exec_domain    *exec_domain;
        unsigned long         flags;
        unsigned long         status;
        __u32                 cpu;
        __s32                 preempt_count;
        mm_segment_t          addr_limit;
        struct restart_block  restart_block;
        unsigned long         previous_esp;
        __u8                  supervisor_stack[0];
};

Each task's thread_info structure is allocated at the end of its stack. The task element of the structure is a pointer to the task's actual task_struct.
Storing the Process Descriptor
The system identifies processes by a unique process identification value or PID. The PID is a numerical value that is represented by the opaque type [5] pid_t, which is typically an int. Because of backward compatibility with earlier Unix and Linux versions, however, the default maximum value is only 32,768 (that of a short int), although the value can optionally be increased to the full range afforded the type. The kernel stores this value as pid inside each process descriptor.
[5] An opaque type is a data type whose physical representation is unknown or irrelevant.
This maximum value is important because it is essentially the maximum number of processes that may exist concurrently on the system. Although 32,768 might be sufficient for a desktop system, large servers may require many more processes. The lower the value, the sooner the values will wrap around, destroying the useful notion that higher values indicate later-run processes than lower values. If the system is willing to break compatibility with old applications, the administrator may increase the maximum value via /proc/sys/kernel/pid_max.
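For example, a minimal user-space sketch of raising the limit by writing to that /proc file (this assumes root privileges, and 4194304 is merely an illustrative larger maximum, not a value mandated by the kernel):

#include <stdio.h>

int main(void)
{
        FILE *f;

        /* raise the maximum PID value; requires root */
        f = fopen("/proc/sys/kernel/pid_max", "w");
        if (!f) {
                perror("fopen");
                return 1;
        }
        fprintf(f, "%d\n", 4194304);    /* illustrative value */
        fclose(f);
        return 0;
}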
Inside the kernel, tasks are typically referenced directly by a pointer to their task_struct structure. In fact, most kernel code that deals with processes works directly with struct task_struct. Consequently, it is very useful to be able to quickly look up the process descriptor of the currently executing task, which is done via the current macro. This macro must be separately implemented by each architecture. Some architectures save a pointer to the task_struct structure of the currently running process in a register, allowing for efficient access. Other architectures, such as x86 (which has few registers to waste), make use of the fact that struct thread_info is stored on the kernel stack to calculate the location of thread_info and subsequently the task_struct.
On x86, current is calculated by masking out the 13 least significant bits of the stack pointer to obtain the thread_info structure. This is done by the current_thread_info() function. The assembly is shown here:
movl $-8192, %eax
andl %esp, %eax

This assumes that the stack size is 8KB. When 4KB stacks are enabled, 4096 is used in lieu of 8192.
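In C, the same calculation looks roughly like the following sketch, assuming 8KB stacks (a THREAD_SIZE of 8192); the authoritative definition lives in the architecture's headers:

#define THREAD_SIZE 8192        /* assumed 8KB kernel stacks */

static inline struct thread_info *current_thread_info(void)
{
        struct thread_info *ti;

        /* mask off the low 13 bits of the stack pointer; thread_info
           lives at the base of the kernel stack */
        asm("andl %%esp, %0" : "=r" (ti) : "0" (~(THREAD_SIZE - 1)));
        return ti;
}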
Finally, current dereferences the task member of thread_info to return the task_struct:
current_thread_info()->task;

Contrast this approach with that taken by PowerPC (IBM's modern RISC-based microprocessor), which stores the current task_struct in a register. Thus, current on PPC merely returns the value stored in the register r2. PPC can take this approach because, unlike x86, it has plenty of registers. Because accessing the process descriptor is a common and important job, the PPC kernel developers deem a register well spent on the task.
Process State
The state field of the process descriptor describes the current condition of the process (see Figure 3.3). Each process on the system is in exactly one of five different states. This value is represented by one of five flags:
  • TASK_RUNNING The process is runnable; it is either currently running or on a runqueue waiting to run (runqueues are discussed in Chapter 4, "Process Scheduling"). This is the only possible state for a process executing in user-space; it can also apply to a process in kernel-space that is actively running.
  • TASK_INTERRUPTIBLE The process is sleeping (that is, it is blocked), waiting for some condition to exist. When this condition exists, the kernel sets the process's state to TASK_RUNNING. The process also awakes prematurely and becomes runnable if it receives a signal.
  • TASK_UNINTERRUPTIBLE This state is identical to TASK_INTERRUPTIBLE except that it does not wake up and become runnable if it receives a signal. This is used in situations where the process must wait without interruption or when the event is expected to occur quite quickly. Because the task does not respond to signals in this state, TASK_UNINTERRUPTIBLE is less often used than TASK_INTERRUPTIBLE [6].

    [6] This is why you have those dreaded unkillable processes with state D in ps(1). Because the task will not respond to signals, you cannot send it a SIGKILL signal. Further, even if you could terminate the task, it would not be wise as the task is supposedly in the middle of an important operation and may hold a semaphore.
  • TASK_ZOMBIE The task has terminated, but its parent has not yet issued a wait4() system call. The task's process descriptor must remain in case the parent wants to access it. If the parent calls wait4(), the process descriptor is deallocated.
  • TASK_STOPPED Process execution has stopped; the task is not running nor is it eligible to run. This occurs if the task receives the SIGSTOP, SIGTSTP, SIGTTIN, or SIGTTOU signal or if it receives any signal while it is being debugged.

Figure 3.3. Flow chart of process states.

Manipulating the Current Process State
Kernel code often needs to change a process's state. The preferred mechanism is using
set_task_state(task, state);        /* set task 'task' to state 'state' */

This function sets the given task to the given state. If applicable, it also provides a memory barrier to force ordering on other processors (this is only needed on SMP systems). Otherwise, it is equivalent to
task->state = state;

The method set_current_state(state) is synonymous with set_task_state(current, state).
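As an illustration of how these routines are used together, the canonical kernel sleep pattern looks roughly like the following sketch, where condition is a placeholder for whatever event the task is waiting on:

set_current_state(TASK_INTERRUPTIBLE);
while (!condition) {                    /* 'condition' is a placeholder */
        schedule();                     /* sleep until woken up */
        set_current_state(TASK_INTERRUPTIBLE);
}
set_current_state(TASK_RUNNING);        /* condition holds; run again */

Setting the state before checking the condition closes the race in which another task sets the condition and issues the wake-up between the check and the sleep.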
Process Context
One of the most important parts of a process is the executing program code. This code is read in from an executable file and executed within the program's address space. Normal program execution occurs in user-space. When a program executes a system call (see Chapter 5, "System Calls") or triggers an exception, it enters kernel-space. At this point, the kernel is said to be "executing on behalf of the process" and is in process context. When in process context, the current macro is valid [7]. Upon exiting the kernel, the process resumes execution in user-space, unless a higher-priority process has become runnable in the interim, in which case the scheduler is invoked to select the higher-priority process.
[7] Other than process context there is interrupt context, which we discuss in Chapter 6, "Interrupts and Interrupt Handlers." In interrupt context, the system is not running on behalf of a process, but is executing an interrupt handler. There is no process tied to interrupt handlers and consequently no process context.
System calls and exception handlers are well-defined interfaces into the kernel. A process can begin executing in kernel-space only through one of these interfaces; all access to the kernel is through these interfaces.
The Process Family Tree
A distinct hierarchy exists between processes in Unix systems, and Linux is no exception. All processes are descendants of the init process, whose PID is one. The kernel starts init in the last step of the boot process. The init process, in turn, reads the system initscripts and executes more programs, eventually completing the boot process.
Every process on the system has exactly one parent. Likewise, every process has zero or more children. Processes that are all direct children of the same parent are called siblings. The relationship between processes is stored in the process descriptor. Each task_struct has a pointer to the parent's task_struct, named parent, and a list of children, named children. Consequently, given the current process, it is possible to obtain the process descriptor of its parent with the following code:
struct task_struct *my_parent = current->parent;

Similarly, it is possible to iterate over a process's children with
struct task_struct *task;
struct list_head *list;

list_for_each(list, &current->children) {
        task = list_entry(list, struct task_struct, sibling);
        /* task now points to one of current's children */
}

The init task's process descriptor is statically allocated as init_task. A good example of the relationship between all processes is the fact that this code will always succeed:
struct task_struct *task;

for (task = current; task != &init_task; task = task->parent)
        ;
/* task now points to init */

In fact, you can follow the process hierarchy from any one process in the system to any other. Oftentimes, however, it is desirable simply to iterate over all processes in the system. This is easy because the task list is a circular doubly linked list. To obtain the next task in the list, given any valid task, use:
list_entry(task->tasks.next, struct task_struct, tasks)

Obtaining the previous works the same way:
list_entry(task->tasks.prev, struct task_struct, tasks)

These two routines are provided by the macros next_task(task) and prev_task(task), respectively. Finally, the macro for_each_process(task) is provided, which iterates over the entire task list. On each iteration, task points to the next task in the list:
struct task_struct *task;

for_each_process(task) {
        /* this pointlessly prints the name and PID of each task */
        printk("%s[%d]/n", task->comm, task->pid);
}

Note: It can be expensive to iterate over every task in a system with many processes; code should have good reason (and no alternative) before doing so.

Process Creation

Process creation in Unix is unique. Most operating systems implement a spawn mechanism to create a new process in a new address space, read in an executable, and begin executing it. Unix takes the unusual approach of separating these steps into two distinct functions: fork() and exec() [8]. The first, fork(), creates a child process that is a copy of the current task. It differs from the parent only in its PID (which is unique), its PPID (parent's PID, which is set to the original process), and certain resources and statistics, such as pending signals, which are not inherited. The second function, exec(), loads a new executable into the address space and begins executing it. The combination of fork() followed by exec() is similar to the single function most operating systems provide.
[8] By exec() I mean any member of the exec() family of functions. The kernel implements the execve() system call, on top of which execlp(), execle(), execv(), and execvp() are implemented.
Copy-on-Write
Traditionally, upon fork() all resources owned by the parent are duplicated and the copy is given to the child. This approach is significantly naïve and inefficient in that it copies much data that might otherwise be shared. Worse still, if the new process were to immediately execute a new image, all that copying would go to waste. In Linux, fork() is implemented through the use of copy-on-write pages. Copy-on-write (or COW) is a technique to delay or altogether prevent copying of the data. Rather than duplicate the process address space, the parent and the child can share a single copy. The data, however, is marked in such a way that if it is written to, a duplicate is made and each process receives a unique copy. Consequently, the duplication of resources occurs only when they are written; until then, they are shared read-only. This technique delays the copying of each page in the address space until it is actually written to. In the case that the pages are never written, for example, if exec() is called immediately after fork(), they never need to be copied. The only overhead incurred by fork() is the duplication of the parent's page tables and the creation of a unique process descriptor for the child. In the common case that a process executes a new executable image immediately after forking, this optimization prevents the wasted copying of large amounts of data (with the address space, easily tens of megabytes). This is an important optimization because the Unix philosophy encourages quick process execution.
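The user-visible consequence of copy-on-write is simply that parent and child behave as though each had a private copy of memory. A minimal user-space sketch illustrating those semantics (not kernel code):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static int value = 1;   /* shared read-only after fork(), until written */

int main(void)
{
        pid_t pid = fork();

        if (pid == 0) {
                value = 2;      /* this write triggers the page copy */
                printf("child:  value = %d\n", value);  /* prints 2 */
                exit(EXIT_SUCCESS);
        }
        waitpid(pid, NULL, 0);
        printf("parent: value = %d\n", value);          /* still 1 */
        return 0;
}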
fork()
Linux implements fork() via the clone() system call. This call takes a series of flags that specify which resources, if any, the parent and child process should share (see the section on "The Linux Implementation of Threads" later in this chapter for more about the flags). The fork(), vfork(), and __clone() library calls all invoke the clone() system call with the requisite flags. The clone() system call, in turn, calls do_fork().
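As a sketch of the relationship, a fork()-like child can be obtained by passing clone() no sharing flags at all, only SIGCHLD so the parent is signaled when the child exits. The helper below is hypothetical, and the raw syscall's optional argument order varies by architecture; all of them are NULL here, so the call behaves like fork():

#include <signal.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* hypothetical helper: behaves roughly like fork().  No CLONE_* flags
   means nothing is shared (everything is duplicated copy-on-write);
   SIGCHLD is delivered to the parent when the child terminates. */
pid_t fork_like(void)
{
        return syscall(SYS_clone, SIGCHLD, NULL, NULL, NULL, NULL);
}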
The bulk of the work in forking is handled by do_fork(), which is defined in kernel/fork.c. This function calls copy_process() and then starts the process running; the interesting work is done by copy_process().