Linux Source Code Analysis - Processes - The Process

This article takes a detailed look at the internal structure of a Linux process, covering process states, identifiers, scheduling policy, memory management and more, to help readers understand how processes actually work.


1. The Concept of a Process

The following is quoted from Baidu Baike:

A process is a single run of a program over some data set in a computer. It is the basic unit of resource allocation and scheduling in the system and the foundation of the operating system's structure. In early, process-oriented computer architectures the process was the basic unit of execution; in modern, thread-oriented architectures the process is the container for threads. A program is a description of instructions, data and their organization; a process is the running instance of a program.

Of course, this is only an abstract, high-level description, the kind instructors like to give in undergraduate operating systems courses. So how is a process actually represented inside the operating system?


2. Process-Related Data Structures

The source version analyzed here is linux-2.6.34.

The Linux kernel manages processes through the task_struct structure (the process descriptor), which holds all the information a process needs. It is defined in linux-2.6.34\include\linux\sched.h.

(1) Process state (state)


The comment about process state in the source reads as follows:

/*
 * Task state bitmask. NOTE! These bits are also
 * encoded in fs/proc/array.c: get_task_state().
 *
 * We have two separate sets of flags: task->state
 * is about runnability, while task->exit_state are
 * about the task exiting. Confusing, but this way
 * modifying one set can't modify the other one by
 * mistake.
 */

Process state is split into two sets: the state member of task_struct (about runnability) and the exit_state member (about exiting).

#define TASK_RUNNING        0
#define TASK_INTERRUPTIBLE    1
#define TASK_UNINTERRUPTIBLE    2
#define __TASK_STOPPED        4
#define __TASK_TRACED        8
/* in tsk->exit_state */
#define EXIT_ZOMBIE        16
#define EXIT_DEAD        32
/* in tsk->state again */
#define TASK_DEAD        64
#define TASK_WAKEKILL        128
#define TASK_WAKING        256
#define TASK_STATE_MAX        512


TASK_RUNNING: the process is either currently executing or ready to execute.

A process in this state is either running or ready to run. The running process is the current process (the one pointed to by current); a ready process can start running as soon as it is given a CPU, which is the only system resource it is waiting for. The system keeps a run queue (run_queue) holding all runnable processes, and when the scheduler runs it picks one of them to execute. We will see the role of the run queue later when we discuss scheduling. The currently running process always stays on this queue; in other words, current always points to some element of the run queue, and which element it points to is decided by the scheduler.


TASK_INTERRUPTIBLE: the process is blocked (sleeping) until some condition becomes true. Once the condition holds, the state is set back to TASK_RUNNING.

A process in this state is waiting for some event or resource and is certainly sitting on one of the system's wait queues (wait_queue). Linux distinguishes two kinds of waiting: interruptible and uninterruptible. A process in an interruptible wait can be woken by a signal: when a signal arrives it moves from the wait state to the runnable state, joins the run queue and waits to be scheduled. A process in an uninterruptible wait is sleeping because some hardware condition cannot yet be satisfied, for example while waiting for a specific system resource; it cannot be interrupted by anything and can only be woken explicitly, for example by a wake-up function such as wake_up().
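The sketch below illustrates this classic sleep/wake pattern using the 2.6-era wait-queue API. It is a minimal sketch, not code from the kernel itself; the names my_wq, my_condition, wait_for_condition and set_condition are hypothetical, introduced only for illustration:

    /* Minimal sketch of the classic sleep/wake idiom (2.6-era API).
     * my_wq and my_condition are hypothetical names used for illustration. */
    #include <linux/wait.h>
    #include <linux/sched.h>

    static DECLARE_WAIT_QUEUE_HEAD(my_wq);
    static int my_condition;

    static void wait_for_condition(void)
    {
        DEFINE_WAIT(wait);

        while (!my_condition) {
            /* mark ourselves TASK_INTERRUPTIBLE and hook onto the wait queue */
            prepare_to_wait(&my_wq, &wait, TASK_INTERRUPTIBLE);
            if (!my_condition)
                schedule();                  /* give up the CPU until woken */
            finish_wait(&my_wq, &wait);      /* back to TASK_RUNNING */
            if (signal_pending(current))     /* interruptible: a signal can wake us */
                return;
        }
    }

    static void set_condition(void)
    {
        my_condition = 1;
        wake_up(&my_wq);    /* move waiters back to TASK_RUNNING / the run queue */
    }

For a TASK_UNINTERRUPTIBLE sleep the pattern is the same except that TASK_UNINTERRUPTIBLE is passed to prepare_to_wait() and the signal_pending() check is dropped.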

TASK_UNINTERRUPTIBLE: has the same meaning as TASK_INTERRUPTIBLE, except that the process cannot be woken by receiving a signal.


__TASK_STOPPED: execution of the process has been stopped.

The process has temporarily stopped running in order to receive some special treatment. A process usually enters this state after receiving SIGSTOP, SIGTSTP, SIGTTIN or SIGTTOU; for example, a process being debugged is in this state.

__TASK_TRACED: the process is being monitored by another process, such as a debugger.


/* exit_state */

EXIT_ZOMBIE: the process has terminated, but its parent has not yet issued wait() or a related system call to retrieve its exit information.

Although the process has terminated, its parent has for some reason not yet called wait() or a similar system call, so the information about the terminated process has not been reclaimed. As the name suggests, a process in this state is a zombie; it is effectively garbage in the system and must be handled so that the resources it still occupies can be released.
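As a quick user-space illustration (a minimal sketch, not kernel code), the program below forks a child that exits immediately; until the parent calls waitpid(), the child lingers in EXIT_ZOMBIE and shows up with state 'Z' in ps:

    /* User-space sketch: the child stays a zombie until the parent reaps it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0)              /* child: exit right away */
            _exit(0);

        sleep(10);                 /* while the parent sleeps, run "ps" in another
                                      terminal: the child is shown as 'Z' */
        waitpid(pid, NULL, 0);     /* reap: the child's task_struct can now go away */
        return 0;
    }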

EXIT_DEAD: the final state of the process.


(2) Process state transition diagram



(3) Process identifier fields


   pid_t pid;       // the id the kernel uses to identify this process
   pid_t tgid;

What is tgid for? It is mainly used to implement the threading mechanism. In Linux, all threads in a thread group share the PID of the group's leader thread (the first lightweight process in the group), and that value is stored in the tgid member. Only the thread-group leader has its pid member set equal to tgid. Note that the getpid() system call returns the current process's tgid value rather than its pid, which guarantees that every thread in a thread group sees the same value from getpid().
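A minimal user-space sketch makes this concrete: in a program with two threads, getpid() returns the same value (the tgid) in both, while the raw gettid system call returns each thread's own kernel pid. (This is an illustrative sketch; compile with -pthread, and syscall(SYS_gettid) is used because older glibc has no gettid() wrapper.)

    /* User-space sketch: getpid() returns the shared tgid, while the raw
     * gettid syscall returns the per-thread kernel pid. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <sys/syscall.h>

    static void *worker(void *arg)
    {
        (void)arg;
        printf("thread:  getpid()=%d  gettid()=%ld\n",
               getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        printf("leader:  getpid()=%d  gettid()=%ld\n",
               getpid(), (long)syscall(SYS_gettid));

        pthread_create(&t, NULL, worker, NULL);   /* same tgid, different pid */
        pthread_join(t, NULL);
        return 0;
    }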



(4) The kernel stack

For details, see my other post: http://blog.youkuaiyun.com/tiankong_/article/details/75647488

What is a process's kernel stack?


A process needs its own stack while running in kernel mode, so the Linux kernel gives every process a kernel stack.

void *stack;
The stack member of task_struct points to a thread_union structure (the Linux kernel represents a process's kernel stack with the thread_union union).


How large is the kernel stack?

A process allocates its kernel stack with alloc_thread_info() and releases it with free_thread_info(). Looking at the source:

alloc_thread_info() calls __get_free_pages() to allocate two pages of memory (8192 bytes).
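For reference, the relevant 2.6-era definitions look roughly like the following (quoted from memory for x86, so treat the exact constants and file locations as approximate):

    /* include/linux/sched.h: the kernel stack and thread_info share one block */
    union thread_union {
        struct thread_info thread_info;
        unsigned long stack[THREAD_SIZE/sizeof(long)];
    };

    /* arch/x86 (approximate): with THREAD_ORDER == 1, THREAD_SIZE is two pages,
     * i.e. 8192 bytes on a system with 4 KB pages */
    #define THREAD_ORDER    1
    #define THREAD_SIZE     (PAGE_SIZE << THREAD_ORDER)

    #define alloc_thread_info(tsk) \
        ((struct thread_info *)__get_free_pages(THREAD_FLAGS, THREAD_ORDER))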


(5) Fields describing process relationships

        /*
	 * pointers to (original) parent process, youngest child, younger sibling,
	 * older sibling, respectively.  (p->father can be replaced with 
	 * p->real_parent->pid)
	 */
	struct task_struct *real_parent; /* real parent process */
	struct task_struct *parent; /* recipient of SIGCHLD, wait4() reports */
	/*
	 * children/sibling forms the list of my natural children
	 */
	struct list_head children;	/* list of my children */
	struct list_head sibling;	/* linkage in my parent's children list */
	struct task_struct *group_leader;	/* threadgroup leader */

real_parent: points to the process's real parent; if the parent that created it no longer exists, it points to the init process (PID 1).

parent: points to the current parent, the process that must be signaled when this one terminates. Its value is usually the same as real_parent.

children: head of the list of this process's children.

sibling: linkage into the sibling list (the parent's children list).

group_leader: points to the leader of the thread group this process belongs to.
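A hedged sketch of how these lists are typically walked: a child is linked into its parent's children list through its own sibling node, so list_for_each_entry() over parent->children with the sibling member visits every child. print_children() below is a hypothetical helper, not a kernel function:

    /* Sketch: walking a task's children the way the kernel does it.
     * Each child is linked into parent->children through its own ->sibling node. */
    #include <linux/kernel.h>
    #include <linux/sched.h>
    #include <linux/list.h>

    static void print_children(struct task_struct *parent)
    {
        struct task_struct *child;

        read_lock(&tasklist_lock);   /* the task lists are protected by tasklist_lock */
        list_for_each_entry(child, &parent->children, sibling)
            printk(KERN_INFO "child: %s [%d]\n", child->comm, child->pid);
        read_unlock(&tasklist_lock);
    }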


(6) The flags field

unsigned int flags;

The possible values of the flags member are listed below:

    #define PF_KSOFTIRQD    0x00000001  /* I am ksoftirqd */  
    #define PF_STARTING 0x00000002  /* being created */  
    #define PF_EXITING  0x00000004  /* getting shut down */  
    #define PF_EXITPIDONE   0x00000008  /* pi exit done on shut down */  
    #define PF_VCPU     0x00000010  /* I'm a virtual CPU */  
    #define PF_WQ_WORKER    0x00000020  /* I'm a workqueue worker */  
    #define PF_FORKNOEXEC   0x00000040  /* forked but didn't exec: just created, has not run exec yet */  
    #define PF_MCE_PROCESS  0x00000080      /* process policy on mce errors */  
    #define PF_SUPERPRIV    0x00000100  /* used super-user privileges */  
    #define PF_DUMPCORE 0x00000200  /* dumped core */  
    #define PF_SIGNALED 0x00000400  /* killed by a signal */  
    #define PF_MEMALLOC 0x00000800  /* Allocating memory */  
    #define PF_USED_MATH    0x00002000  /* if unset the fpu must be initialized before use */  
    #define PF_FREEZING 0x00004000  /* freeze in progress. do not account to load */  
    #define PF_NOFREEZE 0x00008000  /* this thread should not be frozen */  
    #define PF_FROZEN   0x00010000  /* frozen for system suspend */  
    #define PF_FSTRANS  0x00020000  /* inside a filesystem transaction */  
    #define PF_KSWAPD   0x00040000  /* I am kswapd */  
    #define PF_OOM_ORIGIN   0x00080000  /* Allocating much memory to others */  
    #define PF_LESS_THROTTLE 0x00100000 /* Throttle me less: I clean memory */  
    #define PF_KTHREAD  0x00200000  /* I am a kernel thread */  
    #define PF_RANDOMIZE    0x00400000  /* randomize virtual address space */  
    #define PF_SWAPWRITE    0x00800000  /* Allowed to write to swap */  
    #define PF_SPREAD_PAGE  0x01000000  /* Spread page cache over cpuset */  
    #define PF_SPREAD_SLAB  0x02000000  /* Spread some slab caches over cpuset */  
    #define PF_THREAD_BOUND 0x04000000  /* Thread bound to specific cpu */  
    #define PF_MCE_EARLY    0x08000000      /* Early kill for mce process policy */  
    #define PF_MEMPOLICY    0x10000000  /* Non-default NUMA mempolicy */  
    #define PF_MUTEX_TESTER 0x20000000  /* Thread belongs to the rt mutex tester */  
    #define PF_FREEZER_SKIP 0x40000000  /* Freezer should not count it as freezable */  
    #define PF_FREEZER_NOSIG 0x80000000 /* Freezer won't send signals to it */  
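These are plain bit flags, so they are tested and set with ordinary bitwise operations. A brief illustrative sketch (flags_example is a hypothetical function):

    /* Sketch: flags is a plain bitmask, tested and set with bitwise operations. */
    #include <linux/kernel.h>
    #include <linux/sched.h>

    static void flags_example(void)
    {
        if (current->flags & PF_KTHREAD)
            printk(KERN_INFO "%s is a kernel thread\n", current->comm);

        current->flags |= PF_NOFREEZE;   /* e.g. opt this thread out of the freezer */
    }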

(7) Process priority fields

int prio, static_prio, normal_prio;
unsigned int rt_priority;
Field descriptions:

static_prio: the static priority; it can be changed with the nice() system call.

rt_priority: the real-time priority.

normal_prio: derived from the static priority and the scheduling policy.

prio: the dynamic priority.
Real-time priorities range from 0 to MAX_RT_PRIO-1 (that is, up to 99), while the static priority of a normal process ranges from MAX_RT_PRIO to MAX_PRIO-1 (that is, 100 to 139). The larger the value, the lower the static priority.
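The kernel maps nice values onto this static-priority range with simple macros, roughly as in 2.6-era kernel/sched.c (treat the exact location as approximate):

    /* kernel/sched.c (approximate): nice -20..19 maps onto static_prio 100..139 */
    #define NICE_TO_PRIO(nice)  (MAX_RT_PRIO + (nice) + 20)
    #define PRIO_TO_NICE(prio)  ((prio) - MAX_RT_PRIO - 20)
    #define TASK_NICE(p)        PRIO_TO_NICE((p)->static_prio)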

(8) Scheduling-policy fields

unsigned int policy;       /* scheduling policy */
const struct sched_class *sched_class;
struct sched_entity se;
struct sched_rt_entity rt;


policy: the scheduling policy of this process. There are currently five main policies:

#define SCHED_NORMAL		0
#define SCHED_FIFO		1  /*操作系统中经常讲的,先入先出调度算法*/
#define SCHED_RR		2  /*轮流调度算法*/
#define SCHED_BATCH		3
/* SCHED_ISO: reserved but not implemented yet */
#define SCHED_IDLE		5
Policy, description, and the scheduler class that implements it:

SCHED_NORMAL (also called SCHED_OTHER): for ordinary processes. It is a time-sharing policy that allocates CPU time according to dynamic priority (adjustable with the nice() API). Note that processes under this policy and the next two always have lower priority than the two real-time policies below: whenever a real-time process is runnable, it is scheduled first. Scheduler class: CFS.

SCHED_BATCH: a variant of SCHED_NORMAL for non-interactive, CPU-bound processes; it behaves like SCHED_NORMAL but is tuned for throughput. Scheduler class: CFS.

SCHED_IDLE: the lowest priority; processes of this class run only when the system is otherwise idle. Scheduler class: CFS.

SCHED_FIFO: the first-in, first-out real-time policy. Tasks of the same priority are served in arrival order, and a higher-priority task can preempt a lower-priority one. Scheduler class: RT.

SCHED_RR: the round-robin real-time policy. It adds time slices: when a task exhausts its slice it is moved to the tail of the queue for its priority, which keeps scheduling fair among equals; as with SCHED_FIFO, higher-priority tasks preempt lower-priority ones. Real-time tasks can select whichever policy they need with the sched_setscheduler() API. Scheduler class: RT.

sched_class: the scheduling class the process belongs to.

se and rt are scheduling entities, one for normal processes and one for real-time processes; every process has one or the other.
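As a user-space illustration of choosing a policy with sched_setscheduler(), mentioned above for the real-time classes, here is a minimal sketch (it needs root or CAP_SYS_NICE to succeed):

    /* User-space sketch: switch the calling process to SCHED_FIFO, priority 50. */
    #include <stdio.h>
    #include <sched.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };

        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            perror("sched_setscheduler");
            return 1;
        }
        printf("now running under SCHED_FIFO\n");
        return 0;
    }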


(9) The process address space (here comes the important part!)

struct mm_struct *mm, *active_mm;

mm: the memory descriptor of the user address space owned by the process.

active_mm: the memory descriptor the process actually uses while it is running.

Note:

  • For an ordinary process, the two pointers are the same.
  • A kernel thread does not own any memory descriptor of its own; its mm member is always NULL.
  • When a kernel thread runs, its active_mm is initialized to the active_mm of the process that ran before it (see the sketch after this list).
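A simplified sketch of that borrowing, modeled on context_switch() in kernel/sched.c (not the literal kernel code: locking, architecture hooks and the matching mmdrop() on a later switch are omitted; switch_address_space is a hypothetical wrapper name):

    #include <linux/sched.h>
    #include <linux/mm_types.h>
    #include <asm/mmu_context.h>

    static void switch_address_space(struct task_struct *prev,
                                     struct task_struct *next)
    {
        struct mm_struct *mm    = next->mm;
        struct mm_struct *oldmm = prev->active_mm;

        if (!mm) {                        /* next is a kernel thread: mm == NULL  */
            next->active_mm = oldmm;      /* borrow the previous task's mm        */
            atomic_inc(&oldmm->mm_count); /* keep it alive while borrowed         */
            enter_lazy_tlb(oldmm, next);  /* no page-table switch is required     */
        } else {
            switch_mm(oldmm, mm, next);   /* ordinary process: load its own mm    */
        }
    }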

The memory descriptor (see my other post: http://blog.youkuaiyun.com/tiankong_/article/details/75676131)

The process address-space layout (see my other post: http://blog.youkuaiyun.com/tiankong_/article/details/75734817)


(10) Signal-handling fields

/* signal handlers */
struct signal_struct *signal;
struct sighand_struct *sighand;

sigset_t blocked, real_blocked;
sigset_t saved_sigmask;    /* restored if set_restore_sigmask() was used */
struct sigpending pending;

unsigned long sas_ss_sp;
size_t sas_ss_size;
int (*notifier)(void *priv);
void *notifier_data;
sigset_t *notifier_mask;

signal: points to the process's signal descriptor.

sighand: points to the process's signal-handler descriptor.

blocked: the mask of blocked signals.

real_blocked: a temporary mask of blocked signals.


sas_ss_sp: the address of the alternate stack for signal handlers.

sas_ss_size: the size of that alternate stack.

pending: the process's private pending signals (a struct sigpending).
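From user space, sas_ss_sp and sas_ss_size are filled in by the sigaltstack() system call, and SA_ONSTACK makes a handler run on that alternate stack. A minimal sketch (illustrative only):

    /* User-space sketch: sigaltstack() is what fills in sas_ss_sp / sas_ss_size,
     * and SA_ONSTACK makes a handler run on that alternate stack. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>

    static void on_segv(int sig)
    {
        (void)sig;
        /* runs on the alternate stack, so it works even if the main stack is gone */
        _exit(1);
    }

    int main(void)
    {
        stack_t ss = {
            .ss_sp    = malloc(SIGSTKSZ),
            .ss_size  = SIGSTKSZ,
            .ss_flags = 0,
        };
        struct sigaction sa = { .sa_handler = on_segv, .sa_flags = SA_ONSTACK };

        sigaltstack(&ss, NULL);        /* kernel records sas_ss_sp / sas_ss_size */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);
        return 0;
    }

For completeness, the full task_struct definition from linux-2.6.34\include\linux\sched.h follows: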


 

struct task_struct {
    volatile long state;    /* -1 unrunnable, 0 runnable, >0 stopped */  /* process state */
    void *stack;
    atomic_t usage;
    unsigned int flags;    /* per process flags, defined below */
    unsigned int ptrace;

    int lock_depth;        /* BKL lock depth */

#ifdef CONFIG_SMP
#ifdef __ARCH_WANT_UNLOCKED_CTXSW
    int oncpu;
#endif
#endif

    int prio, static_prio, normal_prio;
    unsigned int rt_priority;
    const struct sched_class *sched_class;
    struct sched_entity se;
    struct sched_rt_entity rt;

#ifdef CONFIG_PREEMPT_NOTIFIERS
    /* list of struct preempt_notifier: */
    struct hlist_head preempt_notifiers;
#endif

    /*
     * fpu_counter contains the number of consecutive context switches
     * that the FPU is used. If this is over a threshold, the lazy fpu
     * saving becomes unlazy to save the trap. This is an unsigned char
     * so that after 256 times the counter wraps and the behavior turns
     * lazy again; this to deal with bursty apps that only use FPU for
     * a short time
     */
    unsigned char fpu_counter;
#ifdef CONFIG_BLK_DEV_IO_TRACE
    unsigned int btrace_seq;
#endif

    unsigned int policy;
    cpumask_t cpus_allowed;

#ifdef CONFIG_TREE_PREEMPT_RCU
    int rcu_read_lock_nesting;
    char rcu_read_unlock_special;
    struct rcu_node *rcu_blocked_node;
    struct list_head rcu_node_entry;
#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */

#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
    struct sched_info sched_info;
#endif

    struct list_head tasks;
    struct plist_node pushable_tasks;

    struct mm_struct *mm, *active_mm;
#if defined(SPLIT_RSS_COUNTING)
    struct task_rss_stat    rss_stat;
#endif
/* task state */
    int exit_state;
    int exit_code, exit_signal;
    int pdeath_signal;  /*  The signal sent when the parent dies  */
    /* ??? */
    unsigned int personality;
    unsigned did_exec:1;
    unsigned in_execve:1;    /* Tell the LSMs that the process is doing an
                 * execve */
    unsigned in_iowait:1;


    /* Revert to default priority/policy when forking */
    unsigned sched_reset_on_fork:1;

    pid_t pid;
    pid_t tgid;

#ifdef CONFIG_CC_STACKPROTECTOR
    /* Canary value for the -fstack-protector gcc feature */
    unsigned long stack_canary;
#endif

    /* 
     * pointers to (original) parent process, youngest child, younger sibling,
     * older sibling, respectively.  (p->father can be replaced with 
     * p->real_parent->pid)
     */
    struct task_struct *real_parent; /* real parent process */
    struct task_struct *parent; /* recipient of SIGCHLD, wait4() reports */
    /*
     * children/sibling forms the list of my natural children
     */
    struct list_head children;    /* list of my children */
    struct list_head sibling;    /* linkage in my parent's children list */
    struct task_struct *group_leader;    /* threadgroup leader */

    /*
     * ptraced is the list of tasks this task is using ptrace on.
     * This includes both natural children and PTRACE_ATTACH targets.
     * p->ptrace_entry is p's link on the p->parent->ptraced list.
     */
    struct list_head ptraced;
    struct list_head ptrace_entry;

    /*
     * This is the tracer handle for the ptrace BTS extension.
     * This field actually belongs to the ptracer task.
     */
    struct bts_context *bts;

    /* PID/PID hash table linkage. */
    struct pid_link pids[PIDTYPE_MAX];
    struct list_head thread_group;

    struct completion *vfork_done;        /* for vfork() */
    int __user *set_child_tid;        /* CLONE_CHILD_SETTID */
    int __user *clear_child_tid;        /* CLONE_CHILD_CLEARTID */

    cputime_t utime, stime, utimescaled, stimescaled;
    cputime_t gtime;
#ifndef CONFIG_VIRT_CPU_ACCOUNTING
    cputime_t prev_utime, prev_stime;
#endif
    unsigned long nvcsw, nivcsw; /* context switch counts */
    struct timespec start_time;         /* monotonic time */
    struct timespec real_start_time;    /* boot based time */
/* mm fault and swap info: this can arguably be seen as either mm-specific or thread-specific */
    unsigned long min_flt, maj_flt;

    struct task_cputime cputime_expires;
    struct list_head cpu_timers[3];

/* process credentials */
    const struct cred *real_cred;    /* objective and real subjective task
                     * credentials (COW) */
    const struct cred *cred;    /* effective (overridable) subjective task
                     * credentials (COW) */
    struct mutex cred_guard_mutex;    /* guard against foreign influences on
                     * credential calculations
                     * (notably. ptrace) */
    struct cred *replacement_session_keyring; /* for KEYCTL_SESSION_TO_PARENT */

    char comm[TASK_COMM_LEN]; /* executable name excluding path
                     - access with [gs]et_task_comm (which lock
                       it with task_lock())
                     - initialized normally by setup_new_exec */
/* file system info */
    int link_count, total_link_count;
#ifdef CONFIG_SYSVIPC
/* ipc stuff */
    struct sysv_sem sysvsem;
#endif
#ifdef CONFIG_DETECT_HUNG_TASK
/* hung task detection */
    unsigned long last_switch_count;
#endif
/* CPU-specific state of this task */
    struct thread_struct thread;
/* filesystem information */
    struct fs_struct *fs;
/* open file information */
    struct files_struct *files;
/* namespaces */
    struct nsproxy *nsproxy;
/* signal handlers */
    struct signal_struct *signal;
    struct sighand_struct *sighand;

    sigset_t blocked, real_blocked;
    sigset_t saved_sigmask;    /* restored if set_restore_sigmask() was used */
    struct sigpending pending;

    unsigned long sas_ss_sp;
    size_t sas_ss_size;
    int (*notifier)(void *priv);
    void *notifier_data;
    sigset_t *notifier_mask;
    struct audit_context *audit_context;
#ifdef CONFIG_AUDITSYSCALL
    uid_t loginuid;
    unsigned int sessionid;
#endif
    seccomp_t seccomp;

/* Thread group tracking */
       u32 parent_exec_id;
       u32 self_exec_id;
/* Protection of (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed,
 * mempolicy */
    spinlock_t alloc_lock;

#ifdef CONFIG_GENERIC_HARDIRQS
    /* IRQ handler threads */
    struct irqaction *irqaction;
#endif

    /* Protection of the PI data structures: */
    raw_spinlock_t pi_lock;

#ifdef CONFIG_RT_MUTEXES
    /* PI waiters blocked on a rt_mutex held by this task */
    struct plist_head pi_waiters;
    /* Deadlock detection and priority inheritance handling */
    struct rt_mutex_waiter *pi_blocked_on;
#endif

#ifdef CONFIG_DEBUG_MUTEXES
    /* mutex deadlock detection */
    struct mutex_waiter *blocked_on;
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
    unsigned int irq_events;
    unsigned long hardirq_enable_ip;
    unsigned long hardirq_disable_ip;
    unsigned int hardirq_enable_event;
    unsigned int hardirq_disable_event;
    int hardirqs_enabled;
    int hardirq_context;
    unsigned long softirq_disable_ip;
    unsigned long softirq_enable_ip;
    unsigned int softirq_disable_event;
    unsigned int softirq_enable_event;
    int softirqs_enabled;
    int softirq_context;
#endif
#ifdef CONFIG_LOCKDEP
# define MAX_LOCK_DEPTH 48UL
    u64 curr_chain_key;
    int lockdep_depth;
    unsigned int lockdep_recursion;
    struct held_lock held_locks[MAX_LOCK_DEPTH];
    gfp_t lockdep_reclaim_gfp;
#endif

/* journalling filesystem info */
    void *journal_info;

/* stacked block device info */
    struct bio_list *bio_list;

/* VM state */
    struct reclaim_state *reclaim_state;

    struct backing_dev_info *backing_dev_info;

    struct io_context *io_context;

    unsigned long ptrace_message;
    siginfo_t *last_siginfo; /* For ptrace use.  */
    struct task_io_accounting ioac;
#if defined(CONFIG_TASK_XACCT)
    u64 acct_rss_mem1;    /* accumulated rss usage */
    u64 acct_vm_mem1;    /* accumulated virtual memory usage */
    cputime_t acct_timexpd;    /* stime + utime since last update */
#endif
#ifdef CONFIG_CPUSETS
    nodemask_t mems_allowed;    /* Protected by alloc_lock */
    int cpuset_mem_spread_rotor;
#endif
#ifdef CONFIG_CGROUPS
    /* Control Group info protected by css_set_lock */
    struct css_set *cgroups;
    /* cg_list protected by css_set_lock and tsk->alloc_lock */
    struct list_head cg_list;
#endif
#ifdef CONFIG_FUTEX
    struct robust_list_head __user *robust_list;
#ifdef CONFIG_COMPAT
    struct compat_robust_list_head __user *compat_robust_list;
#endif
    struct list_head pi_state_list;
    struct futex_pi_state *pi_state_cache;
#endif
#ifdef CONFIG_PERF_EVENTS
    struct perf_event_context *perf_event_ctxp;
    struct mutex perf_event_mutex;
    struct list_head perf_event_list;
#endif
#ifdef CONFIG_NUMA
    struct mempolicy *mempolicy;    /* Protected by alloc_lock */
    short il_next;
#endif
    atomic_t fs_excl;    /* holding fs exclusive resources */
    struct rcu_head rcu;

    /*
     * cache last used pipe for splice
     */
    struct pipe_inode_info *splice_pipe;
#ifdef    CONFIG_TASK_DELAY_ACCT
    struct task_delay_info *delays;
#endif
#ifdef CONFIG_FAULT_INJECTION
    int make_it_fail;
#endif
    struct prop_local_single dirties;
#ifdef CONFIG_LATENCYTOP
    int latency_record_count;
    struct latency_record latency_record[LT_SAVECOUNT];
#endif
    /*
     * time slack values; these are used to round up poll() and
     * select() etc timeout values. These are in nanoseconds.
     */
    unsigned long timer_slack_ns;
    unsigned long default_timer_slack_ns;

    struct list_head    *scm_work_list;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
    /* Index of current stored address in ret_stack */
    int curr_ret_stack;
    /* Stack of return addresses for return function tracing */
    struct ftrace_ret_stack    *ret_stack;
    /* time stamp for last schedule */
    unsigned long long ftrace_timestamp;
    /*
     * Number of functions that haven't been traced
     * because of depth overrun.
     */
    atomic_t trace_overrun;
    /* Pause for the tracing */
    atomic_t tracing_graph_pause;
#endif
#ifdef CONFIG_TRACING
    /* state flags for use by tracers */
    unsigned long trace;
    /* bitmask of trace recursion */
    unsigned long trace_recursion;
#endif /* CONFIG_TRACING */
#ifdef CONFIG_CGROUP_MEM_RES_CTLR /* memcg uses this to do batch job */
    struct memcg_batch_info {
        int do_batch;    /* incremented when batch uncharge started */
        struct mem_cgroup *memcg; /* target memcg of uncharge */
        unsigned long bytes;         /* uncharged usage */
        unsigned long memsw_bytes; /* uncharged mem+swap usage */
    } memcg_batch;
#endif
};













