MySQL 8.0+: The Auto-extending innodb_system data file './ibdata1' is of a different size 768 pages

This post describes a problem I ran into while installing and configuring MySQL 8.0.13: the server failed to start because innodb_data_file_path in my.cnf was set too large. Changing it to 12M and restarting fixed the error about the auto-extending InnoDB system data file not matching the size in the configuration file.

1. Installing 8.0.13

Linux: download and install MySQL 8.0+ (tar.xz), ycsdn10's blog on CSDN

2. A problem during master-slave configuration

        While setting up master-slave replication on MySQL 8.0.13, I found there was no my.cnf, so I created /etc/my.cnf from an example found online. The server then failed to start.

        In the corresponding data directory there is an xxx.err file; cat xxx.err shows the error log.
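
For example, a minimal sketch of locating and reading that log, assuming the data directory is /usr/local/mysql/data (substitute your own datadir from my.cnf):

# the path below is an assumption, check the datadir setting in your my.cnf
ls /usr/local/mysql/data/*.err
tail -n 50 /usr/local/mysql/data/*.err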

 

         The contents were:

2021-12-15T23:21:54.621332+08:00 1 [ERROR] [MY-012263] [InnoDB] The Auto-extending innodb_system data file './ibdata1' is of a different size 768 pages (rounded down to MB) than specified in the .cnf file: initial 65536 pages, max 0 (relevant if non-zero) pages!
2021-12-15T23:21:54.622550+08:00 1 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error.
2021-12-15T23:21:55.223052+08:00 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
2021-12-15T23:21:55.223333+08:00 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.

3. Analysis and solution

          The log shows that something is wrong with the InnoDB configuration. Looking at the actual my.cnf, innodb_data_file_path = ibdata1:1G:autoextend asks for a much larger system tablespace than the one already on disk: with the default 16KB InnoDB page size, the 768 pages reported in the error are 12MB, while the configured initial 65536 pages are 1GB. My machine is short on disk space anyway, so I changed the initial size to 12M to match the existing ibdata1.
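
As a sketch, the relevant part of /etc/my.cnf after the change looks like this (other settings omitted, keep whatever else your file already contains):

[mysqld]
# 12M matches the existing ibdata1: 768 pages x 16KB default page size
innodb_data_file_path = ibdata1:12M:autoextend

You can also confirm the size of the existing file directly, e.g. ls -lh on ibdata1 in the data directory; it should report roughly 12M.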

        Restart:

mysql.server restart

         You can see that the first restart reports a failed shutdown (the server was already down because of the earlier error), but the start that follows succeeds. If in doubt, run the restart once more.
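
To double-check that the server is up and running with the new setting, a quick verification sketch (assuming you connect as root):

mysql.server status
mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_data_file_path';"

The second command should print ibdata1:12M:autoextend.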
