checkpoint queue and write list


In Oracle there is the notion of a dirty list (also called the write list), and at the same time there is a checkpoint queue. The two are easily confused. Are they the same thing? If not, how are they related?

Before explaining, let us first look at a description from an expert:

http://www.ixora.com.au/q+a/0103/07160329.htm

That information is not quite right. It is more accurate to say that there are 10 lists, because there are 5 types of buffers and there is a MAIN and AUXILIARY list for each type. Also, there is no separate hot list. The hot list is a sublist of the main replacement list. Of course, there are separate lists for each working set (LRU latch set) of buffers.

You can see all these structures in BUFFERS dumps. Here is an extract ...

(WS)
size: 10000  wsid: 1  state: 0
(WS_REPL_LIST)
main_prev: 32aefbc  main_next: 32ae51c
aux_prev:  32c94a0  aux_next:  32c94a0
curnum: 10000  auxnum: 0
cold: 32a307c  hbmax: 5000  hbufs: 5000
(WS_WRITE_LIST)
main_prev: 32c94b4  main_next: 32c94b4
aux_prev:  32c94bc  aux_next:  32c94bc
curnum: 0  auxnum: 0
(WS_PING_LIST)
main_prev: 32c94d0  main_next: 32c94d0
aux_prev:  32c94d8  aux_next:  32c94d8
curnum: 0  auxnum: 0
(WS_XOBJ_LIST)
main_prev: 32c94ec  main_next: 32c94ec
aux_prev:  32c94f4  aux_next:  32c94f4
curnum: 0  auxnum: 0
(WS_XRNG_LIST)
main_prev: 32c9508  main_next: 32c9508
aux_prev:  32c9510  aux_next:  32c9510
curnum: 0  auxnum: 0


Other than the replacement lists, the other lists are for different types of write buffers. From the bottom up they are buffers that have to be written for a checkpoint range call, a checkpoint object call, a ping, or merely because they are dirty. In each case the main list holds buffers which have yet to be written, and the auxiliary list holds buffers for which a write is pending. The auxiliary replacement list holds unpinned buffers for which a write has been completed, and any other buffers that are immediately reusable. The main replacement list is what is commonly called the LRU list. The operation of the LRU list was explained under the heading The 8i buffer cache in the October issue of Ixora News. You may also be interested in the section on DBWn's new tricks in the September issue.

These 10 lists are mutually exclusive. Each buffer can only be on one of the lists at a time. For completeness it should also be mentioned that dirty buffers are also on a thread checkpoint queue and a file checkpoint queue. These queues are maintained in first modification (low RBA) order and are used to optimize checkpoint processing.

In other words, in Oracle 8 the buffer cache actually contains 5 major categories of lists, or 10 lists in total once the auxiliary lists are counted. Four of those categories (8 lists) are collectively called the write lists; they record the various states of dirty buffers. The familiar LRU list is just one category, actually two lists: the replacement lists. Starting with 9i the lists were extended further, to roughly 12 per working set, which we will not cover here. (Note that we are counting categories of lists, not list instances: the number of instances depends on the db_block_lru_latches parameter, which became a hidden parameter in 9i.)
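The layout described above can be sketched as a minimal data model. This is purely illustrative; `WorkingSet` and the short list labels are our own names, not Oracle internals:

```python
# Sketch of one buffer-cache working set as described above (Oracle 8).
# All names here are illustrative, not actual Oracle structures.
from collections import deque

LIST_TYPES = [
    "REPL",   # replacement list: its MAIN list is the classic LRU list
    "WRITE",  # buffers to be written "merely because they are dirty"
    "PING",   # buffers to be written because of a ping
    "XOBJ",   # checkpoint-object-call writes
    "XRNG",   # checkpoint-range-call writes
]

class WorkingSet:
    """5 list types x (MAIN, AUX) = 10 mutually exclusive lists.

    MAIN holds buffers not yet written; AUX holds buffers with a
    write pending (for REPL: immediately reusable buffers).
    """
    def __init__(self):
        self.lists = {(t, side): deque()
                      for t in LIST_TYPES
                      for side in ("MAIN", "AUX")}

ws = WorkingSet()
print(len(ws.lists))  # 10 lists per working set
```

Each working set (one per LRU latch) carries its own copy of these 10 lists, which matches the `(WS_*_LIST)` sections in the BUFFERS dump above.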

The 4 categories of write lists record each buffer's state ("they are buffers that have to be written for a checkpoint range call, a checkpoint object call, a ping, or merely because they are dirty"). A buffer's state on the write lists can change, and the lists carry no time ordering; if buffers were written to disk in write-list order, that order would not match the order in which the buffers were modified. This is exactly why, before Oracle 8, a checkpoint had to flush all dirty buffers: when a checkpoint occurred, every dirty buffer had to be written to disk, and no new dirty buffers could be produced in the meantime, since otherwise Oracle could not tell whether every dirty buffer from before the checkpoint had actually reached disk.
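To see why a state-organized list gives no safe checkpoint bound, consider this toy contrast (entirely illustrative data; RBAs are reduced to small integers):

```python
# Toy illustration: dirty buffers grouped by state, as on the write lists.
# The resulting flush order bears no relation to modification time, so after
# writing any prefix of the lists there is no RBA before which we can claim
# "everything modified earlier than this is on disk".
dirty_by_state = {
    "ckpt_range": [("C", 30)],              # (block, first-modification RBA)
    "just_dirty": [("A", 10), ("D", 40)],
    "ping":       [("B", 20)],
}

flush_order = [buf for bufs in dirty_by_state.values() for buf in bufs]
rbas = [rba for _, rba in flush_order]
print(rbas)                  # [30, 10, 40, 20]: not in time order
print(rbas == sorted(rbas))  # False: no prefix yields a safe checkpoint RBA
```

The only safe point is therefore "after everything has been written", which is precisely the full flush that pre-8 checkpoints performed.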

Starting with Oracle 8, the checkpoint queue appeared. The checkpoint queue is ordered by each buffer's low RBA, i.e., by the point in time at which the buffer was first modified. The same dirty buffer is on a write list (recording the buffer's state) and at the same time on the checkpoint queue (recording the first-modification order). DBWR can therefore write dirty buffers out in checkpoint-queue order and be guaranteed that they reach disk in the order in which they were first modified. This mechanism is what makes Oracle's incremental checkpoint possible: when an incremental checkpoint occurs, only an end point for the writes is fixed; new dirty buffers may continue to be produced during the checkpoint, and DBWR does not have to write all dirty buffers at once. This greatly improves checkpoint efficiency.
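The mechanism can be illustrated with a small simulation. This is a sketch under simplified assumptions: RBA values are plain integers, and DBWR's work is reduced to a single function; none of these names are Oracle internals:

```python
from collections import deque

# Checkpoint queue: dirty buffers linked in first-modification (low RBA) order.
# A buffer is queued when it FIRST becomes dirty and is never reordered on
# later modifications, so the queue head always has the smallest low RBA.
ckpt_queue = deque()              # entries: (low_rba, block_id)

def make_dirty(low_rba, block_id):
    ckpt_queue.append((low_rba, block_id))   # appends arrive in RBA order

def incremental_checkpoint(target_rba):
    """DBWR writes only the buffers first modified at or before target_rba."""
    written = []
    while ckpt_queue and ckpt_queue[0][0] <= target_rba:
        written.append(ckpt_queue.popleft())
    # The new checkpoint position is the head's low RBA: crash recovery
    # needs to replay redo only from this point forward.
    ckpt_pos = ckpt_queue[0][0] if ckpt_queue else target_rba
    return written, ckpt_pos

for rba, blk in [(10, "A"), (20, "B"), (30, "C")]:
    make_dirty(rba, blk)

written, pos = incremental_checkpoint(target_rba=20)
make_dirty(40, "D")               # new dirty buffers may keep arriving
print(written)                    # [(10, 'A'), (20, 'B')]
print(pos)                        # 30: recovery would start at RBA 30
```

Because writes happen strictly in low-RBA order, the head of the queue at any moment defines a valid checkpoint position, even while new dirty buffers are still being produced behind it.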

In fact, the cooperation between the checkpoint queue and the write lists also makes it possible to simulate asynchronous I/O while implementing incremental checkpoints. That topic is left for a later discussion.

 

For more on checkpoints, see:

http://blog.youkuaiyun.com/biti_rainy/archive/2004/07/12/learn_oracle_20040712_1.aspx


 
