Kernel stack space and user stack space

On Linux, every thread has its own user stack and its own kernel stack. The user stack is used during normal execution, while the kernel stack is used for privileged operations: the kernel does not trust the user stack, so it switches to a stack under its own control when entering kernel mode. Local variables declared in an interrupt service routine are stored on the kernel stack. Each thread's stacks are independent; scheduling and coordination are the operating system's job, although a process can influence the scheduling and context of its threads in several ways.



In short, a program has at least one process, and a process has at least one thread. A thread is a smaller unit of execution than a process, which gives multithreaded programs a higher degree of concurrency.
In addition, each process runs in its own independent memory space, while the threads of a single process share that memory, which greatly reduces the cost of switching between them and of sharing data.
Each thread of a process gets its own independent stack, allocated inside the process's shared address space.
Stack: each thread has its own stack, which holds its execution state and its local (automatic) variables. The stack is set up when the thread starts, and the stacks of different threads are independent of each other, so the stack is thread-safe. Data members of C++ objects with automatic storage duration also live on the stack; each function call gets its own stack frame, and the stack is used to pass arguments between functions. When the operating system switches threads it switches stacks automatically, i.e. it reloads the SS/ESP registers (on 32-bit x86). Stack space does not have to be allocated or freed explicitly in a high-level language.
On Linux, every thread actually has two separate stacks: a user stack and a kernel stack. (Note: this is per thread, not per process.)
The only difference between a process and a thread on Linux is that the threads of one process share that process's address space; that is all. A small sketch of this follows below.
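A minimal user-space sketch of the point above (the names are just illustrative): a global variable is shared by all threads of the process and has the same address in each, while each thread's local variable lands on that thread's own user stack.

```c
/* Sketch: two threads share globals but have separate user stacks. */
#include <pthread.h>
#include <stdio.h>

static int shared = 0;                  /* one copy, shared by all threads        */

static void *worker(void *name)
{
    int local = 0;                      /* separate copy on each thread's stack   */
    printf("%s: &shared=%p  &local=%p\n",
           (const char *)name, (void *)&shared, (void *)&local);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "thread A");
    pthread_create(&b, NULL, worker, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;                           /* build with: cc demo.c -pthread */
}
```

Running it shows identical addresses for `shared` in both threads but two different addresses for `local`, one per stack.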

http://stackoverflow.com/questions/12911841/kernel-stack-and-user-space-stack

What’s the difference between kernel stack and user stack ? In short,
nothing - apart from using a different location in memory (and hence a
different value for the stackpointer register), and usually different
memory access protections. I.e. when executing in user mode, kernel
memory (part of which is the kernel stack) will not be accessible even
if mapped. Vice versa, without explicitly being requested by the
kernel code (in Linux, through functions like copy_from_user()), user
memory (including the user stack) is not usually directly accessible.
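To make that concrete, here is a minimal sketch (not from the answer above) of a character-device write handler; the buffer size and names are made up, but copy_from_user() is the real helper mentioned above that kernel code uses to pull data out of user memory instead of dereferencing the user pointer directly.

```c
/* Sketch: a .write handler that would be wired into a struct file_operations. */
#include <linux/fs.h>
#include <linux/uaccess.h>

static char kbuf[128];                          /* kernel-side buffer (arbitrary size) */

static ssize_t demo_write(struct file *filp, const char __user *ubuf,
                          size_t len, loff_t *off)
{
        size_t n = len;

        if (n > sizeof(kbuf))
                n = sizeof(kbuf);

        /* copy_from_user() returns the number of bytes it could NOT copy;
         * non-zero means the user pointer/range was not accessible. */
        if (copy_from_user(kbuf, ubuf, n))
                return -EFAULT;

        return n;                               /* bytes consumed */
}
```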

Why is [ a separate ] kernel stack used ? Separation of privileges and
security. For one, userspace programs can make their stack(pointer)
anything they want, and there is usually no architectural requirement
to even have a valid one. The kernel therefore cannot trust the
userspace stackpointer to be valid nor usable, and therefore will
require one set under its own control. Different CPU architectures
implement this in different ways; x86 CPUs automatically switch
stackpointers when privilege mode switches occur, and the values to be
used for different privilege levels are configurable - by privileged
code (i.e. only the kernel).
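As a user-side illustration of "userspace can make its stack anything it wants" (a sketch assuming a glibc/Linux system), a program can hand the kernel an arbitrary, self-chosen region of user memory to run its signal handlers on via sigaltstack(); the kernel only records that range, and still uses its own per-thread kernel stack whenever it actually enters kernel mode.

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void handler(int sig)
{
    int local;                          /* lives on the alternate user stack        */
    /* printf is not async-signal-safe in general; fine for this toy demo only. */
    printf("handler local at %p\n", (void *)&local);
}

int main(void)
{
    stack_t ss = {0};
    ss.ss_sp = malloc(SIGSTKSZ);        /* arbitrary heap block chosen as a stack   */
    ss.ss_size = SIGSTKSZ;
    if (sigaltstack(&ss, NULL) == -1) { perror("sigaltstack"); return 1; }

    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_ONSTACK;           /* run the handler on ss_sp                 */
    sigaction(SIGUSR1, &sa, NULL);

    printf("alternate user stack at %p\n", ss.ss_sp);
    raise(SIGUSR1);
    free(ss.ss_sp);
    return 0;
}
```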

If a local variable is declared in an ISR, where will it be stored ?
On the kernel stack. The kernel (Linux kernel, that is) does not hook
ISRs directly to the x86 architecture’s interrupt gates but instead
delegates the interrupt dispatch to a common kernel interrupt
entry/exit mechanism which saves pre-interrupt register state before
calling the registered handler(s). The CPU itself when dispatching an
interrupt might execute a privilege and/or stack switch, and this is
used/set up by the kernel so that the common interrupt entry code can
already rely on a kernel stack being present. That said, interrupts
that occur while executing kernel code will simply (continue to) use
the kernel stack in place at that point. This can, if interrupt
handlers have deeply nested call paths, lead to stack overflows (if a
deep kernel call path is interrupted and the handler causes another
deep path; in Linux, filesystem / software RAID code being interrupted
by network code with iptables active is known to trigger such in
untuned older kernels … solution is to increase kernel stack sizes
for such workloads).
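A hedged sketch of what "a local variable in an ISR" looks like from a Linux module: the IRQ number and names below are invented, but request_irq()/free_irq() are the real registration interface, and the handler's local variable lives on whatever kernel stack is current when the interrupt is dispatched.

```c
#include <linux/interrupt.h>
#include <linux/module.h>

#define DEMO_IRQ 42                       /* made-up IRQ line for illustration */

static irqreturn_t demo_isr(int irq, void *dev_id)
{
        u32 status = 0;                   /* stored on the kernel stack */

        /* Keep work here shallow: the handler runs on whatever kernel stack
         * was in use when the interrupt arrived, so deep call chains here
         * add to an already-deep stack (the overflow scenario above). */
        (void)status;
        return IRQ_HANDLED;
}

static int __init demo_init(void)
{
        /* IRQF_SHARED requires a non-NULL dev_id cookie. */
        return request_irq(DEMO_IRQ, demo_isr, IRQF_SHARED, "demo", &demo_isr);
}

static void __exit demo_exit(void)
{
        free_irq(DEMO_IRQ, &demo_isr);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```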

Does each process have its own kernel stack ? Not just each process -
each thread has its own kernel stack (and, in fact, its own user stack
as well). Remember the only difference between processes and threads
(to Linux) is the fact that multiple threads can share an address
space (forming a process).

How does the process coordinate between both these stacks ? Not at all
- it doesn’t need to. Scheduling (how / when different threads are being run, how their state is saved and restored) is the operating
system’s task and processes don’t need to concern themselves with
this. As threads are created (and each process must have at least one
thread), the kernel creates kernel stacks for them, while userspace
stacks are either explicitly created/provided by whichever mechanism
is used to create a thread (functions like makecontext() or
pthread_create() allow the caller to specify a memory region to be
used for the “child” thread’s stack), or inherited (by on-access
memory cloning, usually called “copy on write” / COW, when creating a
new process). That said, the process can influence scheduling of its
threads and/or influence the context (state, amongst that is the
thread’s stackpointer). There are multiple ways for this: UNIX
signals, setcontext(), pthread_yield() / pthread_cancel(), … - but
this is digressing a bit from the original question.
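For example, pthread_create() can be pointed at caller-provided memory for the new thread's user stack through pthread_attr_setstack(); the matching kernel stack is created by the kernel and never appears here. A minimal sketch (stack size and names chosen arbitrarily):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)        /* 1 MiB user stack for the child thread */

static void *child(void *arg)
{
    int local;                          /* lands inside the caller-provided region */
    printf("child local variable at %p\n", (void *)&local);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    void *stack;

    /* pthread_attr_setstack() requires suitably aligned memory. */
    if (posix_memalign(&stack, sysconf(_SC_PAGESIZE), STACK_SIZE) != 0)
        return 1;

    pthread_attr_init(&attr);
    pthread_attr_setstack(&attr, stack, STACK_SIZE);

    printf("caller-provided stack: %p .. %p\n",
           stack, (void *)((char *)stack + STACK_SIZE));

    pthread_create(&tid, &attr, child, NULL);
    pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    free(stack);
    return 0;                           /* build with: cc demo.c -pthread */
}
```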
