Lab 4 Report
Before starting, in order to understand mpconfig and lapic, you may need to read the Intel processor manuals and the MP Specification; the relevant resources can be found on the course page.
Exercise 1
- Implement mmio_map_region in kern/pmap.c.
```c
// mmio_map_region()
void *
mmio_map_region(physaddr_t pa, size_t size)
{
	// base tracks the next free virtual address in the MMIO window
	static uintptr_t base = MMIOBASE;
	uintptr_t ret = base;

	size = ROUNDUP(size, PGSIZE);
	base += size;
	if (base > MMIOLIM)
		panic("mmio_map_region: reservation overflows MMIOLIM");
	// device memory: cache-disable and write-through, plus writable
	boot_map_region(kern_pgdir, ret, size, pa, PTE_PCD | PTE_PWT | PTE_W);
	return (void *) ret;
}
```
This maps part of the MMIO virtual window to the given physical address; size must first be rounded up to a multiple of PGSIZE.
Exercise 2
- Then modify your implementation of page_init() in kern/pmap.c to avoid adding the page at MPENTRY_PADDR to the free list.
```c
void
page_init(void)
{
	// physical pages from IOPHYSMEM up through the kernel's boot-time
	// allocations (ending after the envs array) are in use
	size_t left_i = PGNUM(IOPHYSMEM);
	size_t right_i = PGNUM(PADDR(envs + NENV));

	// page 0 stays reserved for the real-mode IDT and BIOS structures
	for (size_t i = 1; i < npages; i++) {
		if ((i < left_i || i > right_i) && i != PGNUM(MPENTRY_PADDR)) {
			pages[i].pp_link = page_free_list;
			page_free_list = &pages[i];
		}
	}
}
```
On top of the Lab 3 version, adding the `i != PGNUM(MPENTRY_PADDR)` condition is enough: that page holds the AP bootstrap code, so it must not go on the free list.
Question
- 1.Compare kern/mpentry.S side by side with boot/boot.S. Bearing in mind that kern/mpentry.S is compiled and linked to run above KERNBASE just like everything else in the kernel, what is the purpose of macro MPBOOTPHYS? Why is it necessary in kern/mpentry.S but not in boot/boot.S? In other words, what could go wrong if it were omitted in kern/mpentry.S?
kern/mpentry.S is linked to run above KERNBASE, but it is copied to the physical address MPENTRY_PADDR and the APs begin executing it in real mode, before paging is enabled. The code therefore cannot use its link-time addresses directly; MPBOOTPHYS translates a link address into the corresponding physical load address by hand. boot/boot.S does not need such a macro because it is linked at the same address it is loaded at. If MPBOOTPHYS were omitted, the AP would try to jump to high link addresses above KERNBASE that are not yet mapped, and would fault.
Exercise 3
- Modify mem_init_mp() (in kern/pmap.c) to map per-CPU stacks starting at KSTACKTOP, as shown in inc/memlayout.h.
```c
// mem_init_mp()
for (int i = 0; i < NCPU; i++) {
	uintptr_t kstacktop_i = KSTACKTOP - i * (KSTKSIZE + KSTKGAP);
	// map KSTKSIZE bytes backed by percpu_kstacks[i]; the KSTKGAP
	// bytes below each stack are left unmapped as a guard region
	boot_map_region(kern_pgdir, kstacktop_i - KSTKSIZE, KSTKSIZE,
	                PADDR(percpu_kstacks[i]), PTE_W | PTE_P);
}
```
This sets up one kernel stack per CPU, according to the number of CPUs.
Exercise 4
- The code in trap_init_percpu() (kern/trap.c) initializes the TSS and TSS descriptor for the BSP. It worked in Lab 3, but is incorrect when running on other CPUs. Change the code so that it can work on all CPUs.
```c
void
trap_init_percpu(void)
{
	// each CPU traps onto its own kernel stack via its own TSS
	thiscpu->cpu_ts.ts_esp0 = KSTACKTOP - cpunum() * (KSTKSIZE + KSTKGAP);
	thiscpu->cpu_ts.ts_ss0 = GD_KD;

	// install this CPU's TSS descriptor in the GDT
	gdt[(GD_TSS0 >> 3) + cpunum()] = SEG16(STS_T32A,
		(uint32_t) (&thiscpu->cpu_ts),
		sizeof(struct Taskstate) - 1, 0);
	gdt[(GD_TSS0 >> 3) + cpunum()].sd_s = 0;

	// load the per-CPU task register and the shared IDT
	ltr(GD_TSS0 + (cpunum() << 3));
	lidt(&idt_pd);
}
```
This initializes each CPU's TSS and installs the corresponding entry in the GDT.
Exercise 5
Reading the source shows that the core of lock_kernel is the xchg instruction, which atomically stores a new value and returns the old one; the returned value tells us whether the lock was acquired.
- Apply the big kernel lock as described above, by calling lock_kernel() and unlock_kernel() at the proper locations.
```c
// i386_init()
// Your code here:
lock_kernel();
boot_aps();
```
We must take the lock before waking the other CPUs, so that an awakened CPU cannot start running an environment prematurely.
```c
// mp_main()
// Your code here:
lock_kernel();
sched_yield();
```
After the AP finishes initializing, it must take the lock before scheduling, so that other CPUs cannot interfere with its choice of environment.
```c
// trap()
// LAB 4: Your code here.
lock_kernel();
assert(curenv);
```
When a trap from user mode enters the kernel, we must take the lock.
```c
// env_run()
lcr3(PADDR(e->env_pgdir));
unlock_kernel();
env_pop_tf(&(e->env_tf));
```
Before leaving the kernel we must release the lock.
The BSP acquires the kernel lock before starting the APs, so each AP blocks in mp_main before it can enter the scheduler. Once the APs have been started, the BSP runs the scheduler and launches the first environment, releasing the kernel lock in env_run just before returning to user mode. At that point one of the APs can acquire the lock, run the scheduler, and execute an environment if one is available.
Question
- 2.It seems that using the big kernel lock guarantees that only one CPU can run the kernel code at a time. Why do we still need separate kernel stacks for each CPU? Describe a scenario in which using a shared kernel stack will go wrong, even with the protection of the big kernel lock.
Even with the big kernel lock, the hardware pushes a trap frame onto the kernel stack before any kernel code (and thus lock_kernel()) runs. With a single shared stack, two CPUs trapping at nearly the same time would both push their trap frames onto it and clobber each other's saved state. Per-CPU stacks ensure that data one CPU leaves on its kernel stack cannot be corrupted by another CPU.
Exercise 6
- Implement round-robin scheduling in sched_yield() as described above.
```c
// sched.c
// sched_yield()
```