TrainingArguments: all parameters, original docstring + translation - 2024-08-01

Arguably, the "source" of almost every knob you tune in a training job is this TrainingArguments class. The class is wrapped with the dataclass decorator, its fields are then parsed with HfArgumentParser, and the parsed values are what the rest of the code ends up using.
The class exposes a lot of tunable parameters, and many of them are genuinely useful, so it is worth going through them together.

At first glance, the main purpose of this class is to provide a ready-made set of parameter definitions that you can quickly pull into your own code; whether a given parameter is actually used still depends on how you design your training script. If you need an option, you add it on the command line; if not, you simply skip it. Overall it feels as if the authors of the transformers package have laid out a parameter framework for us.
Let's look at the details.
Source docstring:
How to import:

from transformers import TrainingArguments

Version:

transformers        4.43.3 
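As a quick orientation, here is a minimal sketch (illustrative only, written against transformers 4.43.x) of the dataclass + HfArgumentParser flow described above; it expects the documented flags such as --output_dir on the command line:

from transformers import HfArgumentParser, TrainingArguments

# Parse command-line flags such as --output_dir, --learning_rate, --num_train_epochs
# directly into a TrainingArguments instance.
parser = HfArgumentParser(TrainingArguments)
(training_args,) = parser.parse_args_into_dataclasses()
print(training_args.output_dir, training_args.learning_rate)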

Docstring (original):


@dataclass
class TrainingArguments:
    """
    TrainingArguments is the subset of the arguments we use in our example scripts **which relate to the training loop
    itself**.

    Using [`HfArgumentParser`] we can turn this class into
    [argparse](https://docs.python.org/3/library/argparse#module-argparse) arguments that can be specified on the
    command line.

    Parameters:
        output_dir (`str`):
            The output directory where the model predictions and checkpoints will be written.
        overwrite_output_dir (`bool`, *optional*, defaults to `False`):
            If `True`, overwrite the content of the output directory. Use this to continue training if `output_dir`
            points to a checkpoint directory.
        do_train (`bool`, *optional*, defaults to `False`):
            Whether to run training or not. This argument is not directly used by [`Trainer`], it's intended to be used
            by your training/evaluation scripts instead. See the [example
            scripts](https://github.com/huggingface/transformers/tree/main/examples) for more details.
        do_eval (`bool`, *optional*):
            Whether to run evaluation on the validation set or not. Will be set to `True` if `eval_strategy` is
            different from `"no"`. This argument is not directly used by [`Trainer`], it's intended to be used by your
            training/evaluation scripts instead. See the [example
            scripts](https://github.com/huggingface/transformers/tree/main/examples) for more details.
        do_predict (`bool`, *optional*, defaults to `False`):
            Whether to run predictions on the test set or not. This argument is not directly used by [`Trainer`], it's
            intended to be used by your training/evaluation scripts instead. See the [example
            scripts](https://github.com/huggingface/transformers/tree/main/examples) for more details.
        eval_strategy (`str` or [`~trainer_utils.IntervalStrategy`], *optional*, defaults to `"no"`):
            The evaluation strategy to adopt during training. Possible values are:

                - `"no"`: No evaluation is done during training.
                - `"steps"`: Evaluation is done (and logged) every `eval_steps`.
                - `"epoch"`: Evaluation is done at the end of each epoch.

        prediction_loss_only (`bool`, *optional*, defaults to `False`):
            When performing evaluation and generating predictions, only returns the loss.
        per_device_train_batch_size (`int`, *optional*, defaults to 8):
            The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for training.
        per_device_eval_batch_size (`int`, *optional*, defaults to 8):
            The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for evaluation.
        gradient_accumulation_steps (`int`, *optional*, defaults to 1):
            Number of update steps to accumulate the gradients for, before performing a backward/update pass.

            <Tip warning={true}>

            When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging,
            evaluation, save will be conducted every `gradient_accumulation_steps * xxx_step` training examples.

            </Tip>

        eval_accumulation_steps (`int`, *optional*):
            Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If
            left unset, the whole predictions are accumulated on GPU/NPU/TPU before being moved to the CPU (faster but
            requires more memory).
        eval_delay (`float`, *optional*):
            Number of epochs or steps to wait for before the first evaluation can be performed, depending on the
            eval_strategy.
        torch_empty_cache_steps (`int`, *optional*):
            Number of steps to wait before calling `torch.<device>.empty_cache()`. If left unset or set to None, cache will not be emptied.

            <Tip>

            This can help avoid CUDA out-of-memory errors by lowering peak VRAM usage at a cost of about [10% slower performance](https://github.com/huggingface/transformers/issues/31372).

            </Tip>

        learning_rate (`float`, *optional*, defaults to 5e-5):
            The initial learning rate for [`AdamW`] optimizer.
        weight_decay (`float`, *optional*, defaults to 0):
            The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in [`AdamW`]
            optimizer.
        adam_beta1 (`float`, *optional*, defaults to 0.9):
            The beta1 hyperparameter for the [`AdamW`] optimizer.
        adam_beta2 (`float`, *optional*, defaults to 0.999):
            The beta2 hyperparameter for the [`AdamW`] optimizer.
        adam_epsilon (`float`, *optional*, defaults to 1e-8):
            The epsilon hyperparameter for the [`AdamW`] optimizer.
        max_grad_norm (`float`, *optional*, defaults to 1.0):
            Maximum gradient norm (for gradient clipping).
        num_train_epochs(`float`, *optional*, defaults to 3.0):
            Total number of training epochs to perform (if not an integer, will perform the decimal part percents of
            the last epoch before stopping training).
        max_steps (`int`, *optional*, defaults to -1):
            If set to a positive number, the total number of training steps to perform. Overrides `num_train_epochs`.
            For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until
            `max_steps` is reached.
        lr_scheduler_type (`str` or [`SchedulerType`], *optional*, defaults to `"linear"`):
            The scheduler type to use. See the documentation of [`SchedulerType`] for all possible values.
        lr_scheduler_kwargs ('dict', *optional*, defaults to {}):
            The extra arguments for the lr_scheduler. See the documentation of each scheduler for possible values.
        warmup_ratio (`float`, *optional*, defaults to 0.0):
            Ratio of total training steps used for a linear warmup from 0 to `learning_rate`.
        warmup_steps (`int`, *optional*, defaults to 0):
            Number of steps used for a linear warmup from 0 to `learning_rate`. Overrides any effect of `warmup_ratio`.
        log_level (`str`, *optional*, defaults to `passive`):
            Logger log level to use on the main process. Possible choices are the log levels as strings: 'debug',
            'info', 'warning', 'error' and 'critical', plus a 'passive' level which doesn't set anything and keeps the
            current log level for the Transformers library (which will be `"warning"` by default).
        log_level_replica (`str`, *optional*, defaults to `"warning"`):
            Logger log level to use on replicas. Same choices as `log_level`.
        log_on_each_node (`bool`, *optional*, defaults to `True`):
            In multinode distributed training, whether to log using `log_level` once per node, or only on the main
            node.
        logging_dir (`str`, *optional*):
            [TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to
            *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***.
        logging_strategy (`str` or [`~trainer_utils.IntervalStrategy`], *optional*, defaults to `"steps"`):
            The logging strategy to adopt during training. Possible values are:

                - `"no"`: No logging is done during training.
                - `"epoch"`: Logging is done at the end of each epoch.
                - `"steps"`: Logging is done every `logging_steps`.

        logging_first_step (`bool`, *optional*, defaults to `False`):
            Whether to log the first `global_step` or not.
        logging_steps (`int` or `float`, *optional*, defaults to 500):
            Number of update steps between two logs if `logging_strategy="steps"`. Should be an integer or a float in
            range `[0,1)`. If smaller than 1, will be interpreted as ratio of total training steps.
        logging_nan_inf_filter (`bool`, *optional*, defaults to `True`):
            Whether to filter `nan` and `inf` losses for logging. If set to `True` the loss of every step that is `nan`
            or `inf` is filtered and the average loss of the current logging window is taken instead.

            <Tip>

            `logging_nan_inf_filter` only influences the logging of loss values, it does not change how the gradient is
            computed or applied to the model.

            </Tip>

        save_strategy (`str` or [`~trainer_utils.IntervalStrategy`], *optional*, defaults to `"steps"`):
            The checkpoint save strategy to adopt during training. Possible values are:

                - `"no"`: No save is done during training.
                - `"epoch"`: Save is done at the end of each epoch.
                - `"steps"`: Save is done every `save_steps`.

                If `"epoch"` or `"steps"` is chosen, saving will also be performed at the
                very end of training, always.
        save_steps (`int` or `float`, *optional*, defaults to 500):
            Number of update steps between two checkpoint saves if `save_strategy="steps"`. Should be an integer or a
            float in range `[0,1)`. If smaller than 1, will be interpreted as ratio of total training steps.
        save_total_limit (`int`, *optional*):
            If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in
            `output_dir`. When `load_best_model_at_end` is enabled, the "best" checkpoint according to
            `metric_for_best_model` will always be retained in addition to the most recent ones. For example, for
            `save_total_limit=5` and `load_best_model_at_end`, the four last checkpoints will always be retained
            alongside the best model. When `save_total_limit=1` and `load_best_model_at_end`, it is possible that two
            checkpoints are saved: the last one and the best one (if they are different).
        save_safetensors (`bool`, *optional*, defaults to `True`):
            Use [safetensors](https://huggingface.co/docs/safetensors) saving and loading for state dicts instead of
            default `torch.load` and `torch.save`.
        save_on_each_node (`bool`, *optional*, defaults to `False`):
            When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on
            the main one.

            This should not be activated when the different nodes use the same storage as the files will be saved with
            the same names for each node.
        save_only_model (`bool`, *optional*, defaults to `False`):
            When checkpointing, whether to only save the model, or also the optimizer, scheduler & rng state.
            Note that when this is true, you won't be able to resume training from checkpoint.
            This enables you to save storage by not storing the optimizer, scheduler & rng state.
            You can only load the model using `from_pretrained` with this option set to `True`.
        restore_callback_states_from_checkpoint (`bool`, *optional*, defaults to `False`):
            Whether to restore the callback states from the checkpoint. If `True`, will override
            callbacks passed to the `Trainer` if they exist in the checkpoint.
        use_cpu (`bool`, *optional*, defaults to `False`):
            Whether or not to use cpu. If set to False, we will use cuda or mps device if available.
        seed (`int`, *optional*, defaults to 42):
            Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the
            [`~Trainer.model_init`] function to instantiate the model if it has some randomly initialized parameters.
        data_seed (`int`, *optional*):
            Random seed to be used with data samplers. If not set, random generators for data sampling will use the
            same seed as `seed`. This can be used to ensure reproducibility of data sampling, independent of the model
            seed.
        jit_mode_eval (`bool`, *optional*, defaults to `False`):
            Whether or not to use PyTorch jit trace for inference.
        use_ipex (`bool`, *optional*, defaults to `False`):
            Use Intel extension for PyTorch when it is available. [IPEX
            installation](https://github.com/intel/intel-extension-for-pytorch).
        bf16 (`bool`, *optional*, defaults to `False`):
            Whether to use bf16 16-bit (mixed) precision training instead of 32-bit training. Requires Ampere or higher
            NVIDIA architecture or using CPU (use_cpu) or Ascend NPU. This is an experimental API and it may change.
        fp16 (`bool`, *optional*, defaults to `False`):
            Whether to use fp16 16-bit (mixed) precision training instead of 32-bit training.
        fp16_opt_level (`str`, *optional*, defaults to 'O1'):
            For `fp16` training, Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']. See details on
            the [Apex documentation](https://nvidia.github.io/apex/amp).
        fp16_backend (`str`, *optional*, defaults to `"auto"`):
            This argument is deprecated. Use `half_precision_backend` instead.
        half_precision_backend (`str`, *optional*, defaults to `"auto"`):
            The backend to use for mixed precision training. Must be one of `"auto", "apex", "cpu_amp"`. `"auto"` will
            use CPU/CUDA AMP or APEX depending on the PyTorch version detected, while the other choices will force the
            requested backend.
        bf16_full_eval (`bool`, *optional*, defaults to `False`):
            Whether to use full bfloat16 evaluation instead of 32-bit. This will be faster and save memory but can harm
            metric values. This is an experimental API and it may change.
        fp16_full_eval (`bool`, *optional*, defaults to `False`):
            Whether to use full float16 evaluation instead of 32-bit. This will be faster and save memory but can harm
            metric values.
        tf32 (`bool`, *optional*):
            Whether to enable the TF32 mode, available in Ampere and newer GPU architectures. The default value depends
            on PyTorch's version default of `torch.backends.cuda.matmul.allow_tf32`. For more details please refer to
            the [TF32](https://huggingface.co/docs/transformers/performance#tf32) documentation. This is an
            experimental API and it may change.
        local_rank (`int`, *optional*, defaults to -1):
            Rank of the process during distributed training.
        ddp_backend (`str`, *optional*):
            The backend to use for distributed training. Must be one of `"nccl"`, `"mpi"`, `"ccl"`, `"gloo"`, `"hccl"`.
        tpu_num_cores (`int`, *optional*):
            When training on TPU, the number of TPU cores (automatically passed by launcher script).
        dataloader_drop_last (`bool`, *optional*, defaults to `False`):
            Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size)
            or not.
        eval_steps (`int` or `float`, *optional*):
            Number of update steps between two evaluations if `eval_strategy="steps"`. Will default to the same
            value as `logging_steps` if not set. Should be an integer or a float in range `[0,1)`. If smaller than 1,
            will be interpreted as ratio of total training steps.
        dataloader_num_workers (`int`, *optional*, defaults to 0):
            Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the
            main process.
        past_index (`int`, *optional*, defaults to -1):
            Some models like [TransformerXL](../model_doc/transformerxl) or [XLNet](../model_doc/xlnet) can make use of
            the past hidden states for their predictions. If this argument is set to a positive int, the `Trainer` will
            use the corresponding output (usually index 2) as the past state and feed it to the model at the next
            training step under the keyword argument `mems`.
        run_name (`str`, *optional*, defaults to `output_dir`):
            A descriptor for the run. Typically used for [wandb](https://www.wandb.com/),
            [mlflow](https://www.mlflow.org/) and [comet](https://www.comet.com/site) logging. If not specified, will
            be the same as `output_dir`.
        disable_tqdm (`bool`, *optional*):
            Whether or not to disable the tqdm progress bars and table of metrics produced by
            [`~notebook.NotebookTrainingTracker`] in Jupyter Notebooks. Will default to `True` if the logging level is
            set to warn or lower (default), `False` otherwise.
        remove_unused_columns (`bool`, *optional*, defaults to `True`):
            Whether or not to automatically remove the columns unused by the model forward method.
        label_names (`List[str]`, *optional*):
            The list of keys in your dictionary of inputs that correspond to the labels.

            Will eventually default to the list of argument names accepted by the model that contain the word "label",
            except if the model used is one of the `XxxForQuestionAnswering` in which case it will also include the
            `["start_positions", "end_positions"]` keys.
        load_best_model_at_end (`bool`, *optional*, defaults to `False`):
            Whether or not to load the best model found during training at the end of training. When this option is
            enabled, the best checkpoint will always be saved. See
            [`save_total_limit`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.save_total_limit)
            for more.

            <Tip>

            When set to `True`, the parameters `save_strategy` needs to be the same as `eval_strategy`, and in
            the case it is "steps", `save_steps` must be a round multiple of `eval_steps`.

            </Tip>

        metric_for_best_model (`str`, *optional*):
            Use in conjunction with `load_best_model_at_end` to specify the metric to use to compare two different
            models. Must be the name of a metric returned by the evaluation with or without the prefix `"eval_"`. Will
            default to `"loss"` if unspecified and `load_best_model_at_end=True` (to use the evaluation loss).

            If you set this value, `greater_is_better` will default to `True`. Don't forget to set it to `False` if
            your metric is better when lower.
        greater_is_better (`bool`, *optional*):
            Use in conjunction with `load_best_model_at_end` and `metric_for_best_model` to specify if better models
            should have a greater metric or not. Will default to:

            - `True` if `metric_for_best_model` is set to a value that doesn't end in `"loss"`.
            - `False` if `metric_for_best_model` is not set, or set to a value that ends in `"loss"`.
        ignore_data_skip (`bool`, *optional*, defaults to `False`):
            When resuming training, whether or not to skip the epochs and batches to get the data loading at the same
            stage as in the previous training. If set to `True`, the training will begin faster (as that skipping step
            can take a long time) but will not yield the same results as the interrupted training would have.
        fsdp (`bool`, `str` or list of [`~trainer_utils.FSDPOption`], *optional*, defaults to `''`):
            Use PyTorch Distributed Parallel Training (in distributed training only).

            A list of options along the following:

            - `"full_shard"`: Shard parameters, gradients and optimizer states.
            - `"shard_grad_op"`: Shard optimizer states and gradients.
            - `"hybrid_shard"`: Apply `FULL_SHARD` within a node, and replicate parameters across nodes.
            - `"hybrid_shard_zero2"`: Apply `SHARD_GRAD_OP` within a node, and replicate parameters across nodes.
            - `"offload"`: Offload parameters and gradients to CPUs (only compatible with `"full_shard"` and
              `"shard_grad_op"`).
            - `"auto_wrap"`: Automatically recursively wrap layers with FSDP using `default_auto_wrap_policy`.
        fsdp_config (`str` or `dict`, *optional*):
            Config to be used with fsdp (Pytorch Distributed Parallel Training). The value is either a location of
            fsdp json config file (e.g., `fsdp_config.json`) or an already loaded json file as `dict`.

            A List of config and its options:
                - min_num_params (`int`, *optional*, defaults to `0`):
                    FSDP's minimum number of parameters for Default Auto Wrapping. (useful only when `fsdp` field is
                    passed).
                - transformer_layer_cls_to_wrap (`List[str]`, *optional*):
                    List of transformer layer class names (case-sensitive) to wrap, e.g, `BertLayer`, `GPTJBlock`,
                    `T5Block` .... (useful only when `fsdp` flag is passed).
                - backward_prefetch (`str`, *optional*)
                    FSDP's backward prefetch mode. Controls when to prefetch next set of parameters (useful only when
                    `fsdp` field is passed).

                    A list of options along the following:

                    - `"backward_pre"` : Prefetches the next set of parameters before the current set of parameter's
                      gradient
                        computation.
                    - `"backward_post"` : This prefetches the next set of parameters after the current set of
                      parameter’s
                        gradient computation.
                - forward_prefetch (`bool`, *optional*, defaults to `False`)
                    FSDP's forward prefetch mode (useful only when `fsdp` field is passed).
                     If `"True"`, then FSDP explicitly prefetches the next upcoming all-gather while executing in the
                     forward pass.
                - limit_all_gathers (`bool`, *optional*, defaults to `False`)
                    FSDP's limit_all_gathers (useful only when `fsdp` field is passed).
                     If `"True"`, FSDP explicitly synchronizes the CPU thread to prevent too many in-flight
                     all-gathers.
                - use_orig_params (`bool`, *optional*, defaults to `True`)
                    If `"True"`, allows non-uniform `requires_grad` during init, which means support for interspersed
                    frozen and trainable parameters. Useful in cases such as parameter-efficient fine-tuning. Please
                    refer to this
                    [blog](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019).
                - sync_module_states (`bool`, *optional*, defaults to `True`)
                    If `"True"`, each individually wrapped FSDP unit will broadcast module parameters from rank 0 to
                    ensure they are the same across all ranks after initialization
                - cpu_ram_efficient_loading (`bool`, *optional*, defaults to `False`)
                    If `"True"`, only the first process loads the pretrained model checkpoint while all other processes
                    have empty weights. When this setting is `"True"`, `sync_module_states` must also be `"True"`,
                    otherwise all the processes except the main process would have random weights leading to unexpected
                    behaviour during training.
                - activation_checkpointing (`bool`, *optional*, defaults to `False`):
                    If `"True"`, activation checkpointing is a technique to reduce memory usage by clearing activations of
                    certain layers and recomputing them during a backward pass. Effectively, this trades extra
                    computation time for reduced memory usage.
                - xla (`bool`, *optional*, defaults to `False`):
                    Whether to use PyTorch/XLA Fully Sharded Data Parallel Training. This is an experimental feature
                    and its API may evolve in the future.
                - xla_fsdp_settings (`dict`, *optional*)
                    The value is a dictionary which stores the XLA FSDP wrapping parameters.

                    For a complete list of options, please see [here](
                    https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_fully_sharded_data_parallel.py).
                - xla_fsdp_grad_ckpt (`bool`, *optional*, defaults to `False`):
                    Will use gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be
                    used when the xla flag is set to true, and an auto wrapping policy is specified through
                    fsdp_min_num_params or fsdp_transformer_layer_cls_to_wrap.

        deepspeed (`str` or `dict`, *optional*):
            Use [Deepspeed](https://github.com/microsoft/deepspeed). This is an experimental feature and its API may
            evolve in the future. The value is either the location of DeepSpeed json config file (e.g.,
            `ds_config.json`) or an already loaded json file as a `dict`.

            <Tip warning={true}>
                If enabling any Zero-init, make sure that your model is not initialized until
                *after* initializing the `TrainingArguments`, else it will not be applied.
            </Tip>

        accelerator_config (`str`, `dict`, or `AcceleratorConfig`, *optional*):
            Config to be used with the internal `Accelerator` implementation. The value is either a location of
            accelerator json config file (e.g., `accelerator_config.json`), an already loaded json file as `dict`,
            or an instance of [`~trainer_pt_utils.AcceleratorConfig`].

            A list of config and its options:
                - split_batches (`bool`, *optional*, defaults to `False`):
                    Whether or not the accelerator should split the batches yielded by the dataloaders across the devices. If
                    `True` the actual batch size used will be the same on any kind of distributed processes, but it must be a
                    round multiple of the `num_processes` you are using. If `False`, actual batch size used will be the one set
                    in your script multiplied by the number of processes.
                - dispatch_batches (`bool`, *optional*):
                    If set to `True`, the dataloader prepared by the Accelerator is only iterated through on the main process
                    and then the batches are split and broadcast to each process. Will default to `True` for `DataLoader` whose
                    underlying dataset is an `IterableDataset`, `False` otherwise.
                - even_batches (`bool`, *optional*, defaults to `True`):
                    If set to `True`, in cases where the total batch size across all processes does not exactly divide the
                    dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among
                    all workers.
                - use_seedable_sampler (`bool`, *optional*, defaults to `True`):
                    Whether or not use a fully seedable random sampler ([`accelerate.data_loader.SeedableRandomSampler`]). Ensures
                    training results are fully reproducible using a different sampling technique. While seed-to-seed results
                    may differ, on average the differences are negligible when using multiple different seeds to compare. Should
                    also be run with [`~utils.set_seed`] for the best results.
                - use_configured_state (`bool`, *optional*, defaults to `False`):
                    Whether or not to use a pre-configured `AcceleratorState` or `PartialState` defined before calling `TrainingArguments`.
                    If `True`, an `Accelerator` or `PartialState` must be initialized. Note that by doing so, this could lead to issues
                    with hyperparameter tuning.

        label_smoothing_factor (`float`, *optional*, defaults to 0.0):
            The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying onehot-encoded
            labels are changed from 0s and 1s to `label_smoothing_factor/num_labels` and `1 - label_smoothing_factor +
            label_smoothing_factor/num_labels` respectively.
        debug (`str` or list of [`~debug_utils.DebugOption`], *optional*, defaults to `""`):
            Enable one or more debug features. This is an experimental feature.

            Possible options are:

            - `"underflow_overflow"`: detects overflow in model's input/outputs and reports the last frames that led to
              the event
            - `"tpu_metrics_debug"`: print debug metrics on TPU

            The options should be separated by whitespaces.
        optim (`str` or [`training_args.OptimizerNames`], *optional*, defaults to `"adamw_torch"`):
            The optimizer to use: adamw_hf, adamw_torch, adamw_torch_fused, adamw_apex_fused, adamw_anyprecision or
            adafactor.
        optim_args (`str`, *optional*):
            Optional arguments that are supplied to AnyPrecisionAdamW.
        group_by_length (`bool`, *optional*, defaults to `False`):
            Whether or not to group together samples of roughly the same length in the training dataset (to minimize
            padding applied and be more efficient). Only useful if applying dynamic padding.
        length_column_name (`str`, *optional*, defaults to `"length"`):
            Column name for precomputed lengths. If the column exists, grouping by length will use these values rather
            than computing them on train startup. Ignored unless `group_by_length` is `True` and the dataset is an
            instance of `Dataset`.
        report_to (`str` or `List[str]`, *optional*, defaults to `"all"`):
            The list of integrations to report the results and logs to. Supported platforms are `"azure_ml"`,
            `"clearml"`, `"codecarbon"`, `"comet_ml"`, `"dagshub"`, `"dvclive"`, `"flyte"`, `"mlflow"`, `"neptune"`,
            `"tensorboard"`, and `"wandb"`. Use `"all"` to report to all integrations installed, `"none"` for no
            integrations.
        ddp_find_unused_parameters (`bool`, *optional*):
            When using distributed training, the value of the flag `find_unused_parameters` passed to
            `DistributedDataParallel`. Will default to `False` if gradient checkpointing is used, `True` otherwise.
        ddp_bucket_cap_mb (`int`, *optional*):
            When using distributed training, the value of the flag `bucket_cap_mb` passed to `DistributedDataParallel`.
        ddp_broadcast_buffers (`bool`, *optional*):
            When using distributed training, the value of the flag `broadcast_buffers` passed to
            `DistributedDataParallel`. Will default to `False` if gradient checkpointing is used, `True` otherwise.
        dataloader_pin_memory (`bool`, *optional*, defaults to `True`):
            Whether you want to pin memory in data loaders or not. Will default to `True`.
        dataloader_persistent_workers (`bool`, *optional*, defaults to `False`):
            If True, the data loader will not shut down the worker processes after a dataset has been consumed once.
            This allows to maintain the workers Dataset instances alive. Can potentially speed up training, but will
            increase RAM usage. Will default to `False`.
        dataloader_prefetch_factor (`int`, *optional*):
            Number of batches loaded in advance by each worker.
            2 means there will be a total of 2 * num_workers batches prefetched across all workers.
        skip_memory_metrics (`bool`, *optional*, defaults to `True`):
            Whether to skip adding of memory profiler reports to metrics. This is skipped by default because it slows
            down the training and evaluation speed.
        push_to_hub (`bool`, *optional*, defaults to `False`):
            Whether or not to push the model to the Hub every time the model is saved. If this is activated,
            `output_dir` will begin a git directory synced with the repo (determined by `hub_model_id`) and the content
            will be pushed each time a save is triggered (depending on your `save_strategy`). Calling
            [`~Trainer.save_model`] will also trigger a push.

            <Tip warning={true}>

            If `output_dir` exists, it needs to be a local clone of the repository to which the [`Trainer`] will be
            pushed.

            </Tip>

        resume_from_checkpoint (`str`, *optional*):
            The path to a folder with a valid checkpoint for your model. This argument is not directly used by
            [`Trainer`], it's intended to be used by your training/evaluation scripts instead. See the [example
            scripts](https://github.com/huggingface/transformers/tree/main/examples) for more details.
        hub_model_id (`str`, *optional*):
            The name of the repository to keep in sync with the local *output_dir*. It can be a simple model ID in
            which case the model will be pushed in your namespace. Otherwise it should be the whole repository name,
            for instance `"user_name/model"`, which allows you to push to an organization you are a member of with
            `"organization_name/model"`. Will default to `user_name/output_dir_name` with *output_dir_name* being the
            name of `output_dir`.

            Will default to the name of `output_dir`.
        hub_strategy (`str` or [`~trainer_utils.HubStrategy`], *optional*, defaults to `"every_save"`):
            Defines the scope of what is pushed to the Hub and when. Possible values are:

            - `"end"`: push the model, its configuration, the tokenizer (if passed along to the [`Trainer`]) and a
              draft of a model card when the [`~Trainer.save_model`] method is called.
            - `"every_save"`: push the model, its configuration, the tokenizer (if passed along to the [`Trainer`]) and
              a draft of a model card each time there is a model save. The pushes are asynchronous to not block
              training, and in case the save are very frequent, a new push is only attempted if the previous one is
              finished. A last push is made with the final model at the end of training.
            - `"checkpoint"`: like `"every_save"` but the latest checkpoint is also pushed in a subfolder named
              last-checkpoint, allowing you to resume training easily with
              `trainer.train(resume_from_checkpoint="last-checkpoint")`.
            - `"all_checkpoints"`: like `"checkpoint"` but all checkpoints are pushed like they appear in the output
              folder (so you will get one checkpoint folder per folder in your final repository)

        hub_token (`str`, *optional*):
            The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with
            `huggingface-cli login`.
        hub_private_repo (`bool`, *optional*, defaults to `False`):
            If True, the Hub repo will be set to private.
        hub_always_push (`bool`, *optional*, defaults to `False`):
            Unless this is `True`, the `Trainer` will skip pushing a checkpoint when the previous push is not finished.
        gradient_checkpointing (`bool`, *optional*, defaults to `False`):
            If True, use gradient checkpointing to save memory at the expense of slower backward pass.
        gradient_checkpointing_kwargs (`dict`, *optional*, defaults to `None`):
            Key word arguments to be passed to the `gradient_checkpointing_enable` method.
        include_inputs_for_metrics (`bool`, *optional*, defaults to `False`):
            Whether or not the inputs will be passed to the `compute_metrics` function. This is intended for metrics
            that need inputs, predictions and references for scoring calculation in Metric class.
        eval_do_concat_batches (`bool`, *optional*, defaults to `True`):
            Whether to recursively concat inputs/losses/labels/predictions across batches. If `False`,
            will instead store them as lists, with each batch kept separate.
        auto_find_batch_size (`bool`, *optional*, defaults to `False`)
            Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding
            CUDA Out-of-Memory errors. Requires accelerate to be installed (`pip install accelerate`)
        full_determinism (`bool`, *optional*, defaults to `False`)
            If `True`, [`enable_full_determinism`] is called instead of [`set_seed`] to ensure reproducible results in
            distributed training. Important: this will negatively impact the performance, so only use it for debugging.
        torchdynamo (`str`, *optional*):
            If set, the backend compiler for TorchDynamo. Possible choices are `"eager"`, `"aot_eager"`, `"inductor"`,
            `"nvfuser"`, `"aot_nvfuser"`, `"aot_cudagraphs"`, `"ofi"`, `"fx2trt"`, `"onnxrt"` and `"ipex"`.
        ray_scope (`str`, *optional*, defaults to `"last"`):
            The scope to use when doing hyperparameter search with Ray. By default, `"last"` will be used. Ray will
            then use the last checkpoint of all trials, compare those, and select the best one. However, other options
            are also available. See the [Ray documentation](
            https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial) for
            more options.
        ddp_timeout (`int`, *optional*, defaults to 1800):
            The timeout for `torch.distributed.init_process_group` calls, used to avoid GPU socket timeouts when
            performing slow operations in distributed runnings. Please refer the [PyTorch documentation]
            (https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) for more
            information.
        use_mps_device (`bool`, *optional*, defaults to `False`):
            This argument is deprecated.`mps` device will be used if it is available similar to `cuda` device.
        torch_compile (`bool`, *optional*, defaults to `False`):
            Whether or not to compile the model using PyTorch 2.0
            [`torch.compile`](https://pytorch.org/get-started/pytorch-2.0/).

            This will use the best defaults for the [`torch.compile`
            API](https://pytorch.org/docs/stable/generated/torch.compile.html?highlight=torch+compile#torch.compile).
            You can customize the defaults with the argument `torch_compile_backend` and `torch_compile_mode` but we
            don't guarantee any of them will work as the support is progressively rolled in in PyTorch.

            This flag and the whole compile API is experimental and subject to change in future releases.
        torch_compile_backend (`str`, *optional*):
            The backend to use in `torch.compile`. If set to any value, `torch_compile` will be set to `True`.

            Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions.

            This flag is experimental and subject to change in future releases.
        torch_compile_mode (`str`, *optional*):
            The mode to use in `torch.compile`. If set to any value, `torch_compile` will be set to `True`.

            Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions.

            This flag is experimental and subject to change in future releases.
        split_batches (`bool`, *optional*):
            Whether or not the accelerator should split the batches yielded by the dataloaders across the devices
            during distributed training. If set to `True`, the actual batch size used will be the same on any kind of
            distributed processes, but it must be a round multiple of the number of processes you are using (such as GPUs).
        include_tokens_per_second (`bool`, *optional*):
            Whether or not to compute the number of tokens per second per device for training speed metrics.

            This will iterate over the entire training dataloader once beforehand, and will slow down the entire
            process.

        include_num_input_tokens_seen (`bool`, *optional*):
            Whether or not to track the number of input tokens seen throughout training.

            May be slower in distributed training as gather operations must be called.

        neftune_noise_alpha (`Optional[float]`):
            If not `None`, this will activate NEFTune noise embeddings. This can drastically improve model performance
            for instruction fine-tuning. Check out the [original paper](https://arxiv.org/abs/2310.05914) and the
            [original code](https://github.com/neelsjain/NEFTune). Support transformers `PreTrainedModel` and also
            `PeftModel` from peft.
        optim_target_modules (`Union[str, List[str]]`, *optional*):
            The target modules to optimize, i.e. the module names that you would like to train. Currently this is
            only used for the GaLore algorithm (https://arxiv.org/abs/2403.03507). See
            https://github.com/jiaweizzhao/GaLore for more details. You need to make sure to pass a valid GaLore
            optimizer, e.g. one of: "galore_adamw", "galore_adamw_8bit", "galore_adafactor", and make sure that the
            target modules are `nn.Linear` modules only.

        batch_eval_metrics (`Optional[bool]`, defaults to `False`):
            If set to `True`, evaluation will call compute_metrics at the end of each batch to accumulate statistics
            rather than saving all eval logits in memory. When set to `True`, you must pass a compute_metrics function
            that takes a boolean argument `compute_result`, which when passed `True`, will trigger the final global
            summary statistics from the batch-level summary statistics you've accumulated over the evaluation set.

        eval_on_start (`bool`, *optional*, defaults to `False`):
            Whether to perform an evaluation step (sanity check) before training to ensure the validation steps work correctly.

        eval_use_gather_object (`bool`, *optional*, defaults to `False`):
            Whether to run recursively gather object in a nested list/tuple/dictionary of objects from all devices. This should only be enabled if users are not just returning tensors, and this is actively discouraged by PyTorch.
    """

Translation of the docstring (into Chinese):

    TrainingArguments 是我们在示例脚本中使用的、**与训练循环本身相关**的那一部分参数的子集。

使用 [`HfArgumentParser`],我们可以把这个类转换为
[argparse](https://docs.python.org/3/library/argparse#module-argparse) 参数,从而可以在
命令行中指定这些参数。

参数:
output_dir(`str`):
将写入模型预测和检查点的输出目录。
overwrite_output_dir(`bool`,*可选*,默认为`False `):
如果为True,则覆盖输出目录的内容。如果`output_dir,则使用此选项继续训练`
指向检查点目录。
do_train(`bool `,*可选*,默认为`False `):
是否进行训练。这个论点不是由[“培训师”]直接使用的,它的目的是
通过你的培训/评估脚本。参见[示例
脚本](https://github.com/huggingface/transformers/tree/main/examples)了解更多详情。
do_eval(`bool`,*可选*):
是否对验证集运行评估。如果`eval_strategy`为
与“否”不同。这个论点不是由[“培训师”]直接使用的,它旨在供您的
取而代之的是培训/评估脚本。参见[示例
脚本](https://github.com/huggingface/transformers/tree/main/examples)了解更多详情。
do_predict (`bool`, *可选*, 默认为 `False`):
是否在测试集上运行预测。这个参数并不会被 [`Trainer`] 直接使用,它是
供你的训练/评估脚本使用的。参见[示例
脚本](https://github.com/huggingface/transformers/tree/main/examples)了解更多详情。
eval_strategy (`str` 或 [`~trainer_utils.IntervalStrategy`], *可选*, 默认为 `"no"`):
训练期间采用的评估策略。可能的取值有:

-“no”:训练期间不进行评估。
-“steps”:每隔 `eval_steps` 步进行一次评估(并记录日志)。
-“epoch”:在每个 epoch 结束时进行评估。

prediction_loss_only(`bool`,*可选*,默认为`False’):
在执行评估和生成预测时,只返回损失。
per_device_train_batch_size(`int`,*可选*,默认为8):
用于训练的每个GPU/XPU/TPU/MPS/NPU内核/CPU的批量大小。
per_device_eval_batch_size(`int`,*可选*,默认为8):
用于评估的每个GPU/XPU/TPU/MPS/NPU内核/CPU的批量大小。
gradient_accumulation_steps (`int`, *可选*, 默认为 1):
在执行一次反向传播/参数更新之前,累积梯度的更新步数。

<提示警告={true}>

使用梯度累积时,只有发生反向传播的那一步才被计为一步。因此,日志记录、评估和保存都会每隔 `gradient_accumulation_steps * xxx_step` 个训练步进行一次。

</提示>
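下面给一个示意性的小例子(非官方写法,数值仅供说明),展示 per_device_train_batch_size 与 gradient_accumulation_steps 如何共同决定有效批大小:

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
)
# 单卡情形下,每次参数更新对应的有效批大小 = 8 * 4 = 32
effective_batch = args.per_device_train_batch_size * args.gradient_accumulation_steps
print(effective_batch)  # 32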

eval_accumulation_steps (`int`, *可选*):
在将结果移动到CPU之前,累积输出张量的预测步骤数。如果
如果未设置,则整个预测在移动到CPU之前会累积在GPU/NPU/TPU上(速度更快,但
需要更多内存)。
eval_delay(浮点数,*可选*):
在执行第一次评估之前要等待的时间或步骤数,具体取决于
eval策略。
torch_empty_cache_steps (`int`, *可选*):
调用 `torch.<device>.empty_cache()` 之前要等待的步数。如果不设置或设为 None,将不会清空缓存。

<提示>

这可以通过降低峰值VRAM使用率来帮助避免CUDA内存不足错误,但代价是性能降低约[10%](https://github.com/huggingface/transformers/issues/31372).

</提示>

learning_rate('float'*可选*,默认为5e-5):
[AdamW]优化器的初始学习率。
weight_decay('float'*可选*,默认为0):
权重衰减适用于所有层(如果不是零),但[AdamW]中的所有偏差和LayerNorm权重除外
优化器。
adam_beta1('float'*可选*,默认为0.9):
[AdamW]优化器的beta1超参数。
adam_beta2('float'*可选*,默认为0.999):
[AdamW]优化器的beta2超参数。
adam_epsilon(`float`,*可选*,默认为1e-8):
[AdamW]优化器的epsilon超参数。
max_grad_norm(`float`,*可选*,默认为1.0):
最大梯度范数(用于梯度剪裁)。
num_train_epochs (`float`, *可选*, 默认为 3.0):
要执行的训练 epoch 总数(如果不是整数,在停止训练之前,最后一个 epoch 只会执行其小数部分所对应的比例)。
max_steps(`int`,*可选*,默认为-1):
如果设置为正数,则表示要执行的训练步骤总数。覆盖`num_train_epochs`。
对于有限的数据集,通过数据集重复训练(如果所有数据都用完了),直到
`已达到max_steps。
lr_scheduler_type(`str`或[`SchedulerType`]*可选*,默认为`“线性”`):
要使用的计划程序类型。有关所有可能的值,请参阅[`SchedulerType']的文档。
lr_scheduler_kwargs('dict'*可选*,默认为{}):
lr_scheduler的额外参数。有关可能的值,请参阅每个调度器的文档。
warmup_ratio('float'*可选*,默认为0.0):
用于从0到“learning_rate”的线性预热的总训练步骤的比率。
warmup_steps(`int`,*可选*,默认为0):
从0到“learning_rate”的线性预热所用的步骤数。覆盖“预热比率”的任何影响。
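下面是一个示意性的学习率与预热配置(数值仅作演示,需要按任务自行调整):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",
    learning_rate=5e-5,          # AdamW 的初始学习率
    lr_scheduler_type="cosine",  # 也可以使用默认的 "linear"
    warmup_ratio=0.1,            # 前 10% 的训练步用于线性预热
    weight_decay=0.01,
)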
log_level(`str`,*可选*,默认为`passive`):
主进程上使用的记录器日志级别。可能的选择是将日志级别设置为字符串:“debug”,
“信息”、“警告”、“错误”和“严重”,加上一个“被动”级别,该级别不设置任何内容,并保持
Transformers库的当前日志级别(默认情况下为“警告”)。
log_level_replica(`str`,*可选*,默认为`“warning”`):
要在副本上使用的记录器日志级别。与`log_level`相同的选项”
log_on_each_node(`bool`,*可选*,默认为`True`):
在多节点分布式训练中,是每个节点使用“log_level”登录一次,还是仅在主节点上登录
节点。
logging_dir(`str`,*可选*):
[TensorBoard](https://www.tensorflow.org/tensorboard)日志目录。将默认为
*output_dir/runs/**CURRENT_DATETIME_HOSTNAME***。
logging_strategy (`str` 或 [`~trainer_utils.IntervalStrategy`], *可选*, 默认为 `"steps"`):
训练期间采用的日志记录策略。可能的取值有:

-“no”:训练期间不记录日志。
-“epoch”:在每个 epoch 结束时记录日志。
-“steps”:每隔 `logging_steps` 步记录一次日志。

logging_first_step(`bool`,*可选*,默认为`False `):
是否记录第一个“global_step”。
logging_steps (`int` 或 `float`, *可选*, 默认为 500):
当 `logging_strategy="steps"` 时,两次日志记录之间的更新步数。应为整数,或
`[0,1)` 范围内的浮点数;如果小于 1,会被解释为总训练步数的比例。
logging_nan_inf_filter (`bool`, *可选*, 默认为 `True`):
是否在记录日志时过滤 `nan` 和 `inf` 的损失。如果设为 `True`,任何一步出现 `nan`
或 `inf` 的损失都会被过滤,转而记录当前日志窗口内的平均损失。

<提示>

`logging_nan_inf_filter只影响损失值的记录,它不会改变
计算梯度或将梯度应用于模型。

</提示>

save_strategy(`str`或[`~trainer_utils.IntervalStrategy`]*可选*,默认为`步骤'):
训练期间采用的检查点保存策略。可能的值有:

-“否”:训练期间不进行任何保存。
-“纪元”:保存在每个纪元结束时完成。
-`“steps”`:每次`Save_steps`都会进行保存。

如果选择“纪元”或“步骤”,也将在
训练结束时,总是这样。
save_steps(`int`或`float`,*可选*,默认值为500):
如果`save_strategy=“steps”`,则两个检查点保存前的更新步骤数。应该是整数或
浮点数在[0,1)范围内。如果小于1,将被解释为总训练步骤的比率。
save_total_limit (`int`, *可选*):
如果传入一个值,将限制检查点的总数,并删除 `output_dir` 中较旧的检查点。
当启用 `load_best_model_at_end` 时,除了最近的几个检查点之外,`metric_for_best_model`
对应的“最佳”检查点也会始终保留。例如,当 `save_total_limit=5` 且启用
`load_best_model_at_end` 时,最后四个检查点会与最佳模型一起保留;当
`save_total_limit=1` 且启用 `load_best_model_at_end` 时,可能会保存两个
检查点:最后一个和最佳的那个(如果二者不同)。
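一个示意性的保存策略配置(仅作演示):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",
    save_strategy="steps",
    save_steps=500,         # 每 500 个更新步保存一次检查点
    save_total_limit=2,     # 只保留最近的 2 个检查点,更旧的会被删除
    save_safetensors=True,  # 使用 safetensors 格式保存
)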
save_safetensors (`bool`, *可选*, 默认为 `True`):
使用 [safetensors](https://huggingface.co/docs/safetensors) 来保存和加载 state dict,而不是
默认的“torch.load”和“torch.save”。
save_on_each_node(`bool `,*可选*,默认为`False `):
在进行多节点分布式训练时,是在每个节点上保存模型和检查点,还是仅在
主要的一个。

当不同节点使用与文件保存时相同的存储时,不应激活此功能
每个节点的名称相同。
save_only_model(`bool`,*可选*,默认为`False `):
检查点时,是只保存模型,还是同时保存优化器、调度器和rng状态。
请注意,当这是真的时,您将无法从检查点恢复训练。
这使您能够通过不存储优化器、调度器和rng状态来节省存储空间。
此选项设置为 `True` 时,您只能使用 `from_pretrained` 来加载模型。
restore_callback_states_from_checkpoint(`bool `,*可选*,默认为`False `):
是否从检查点恢复回调状态。如果为True,将覆盖
如果检查点中存在回调,则将其传递给“Trainer”。"
use_cpu(`bool `,*可选*,默认为`False `):
是否使用cpu。如果设置为False,我们将使用cuda或mps设备(如果可用)。
seed(`int`,*可选*,默认为42):
随机种子将在训练开始时设置。为确保运行的可重复性,请使用
[`~Trainer.model_init`]函数,如果模型有一些随机初始化的参数,则实例化模型。
data_seed (`int`, *可选*):
随机种子与数据采样器一起使用。如果未设置,用于数据采样的随机生成器将使用
与“种子”相同的种子。这可用于确保数据采样的可重复性,与模型无关
种子。
jit_mode_eval(`bool`,*可选*,默认为`False’):
是否使用PyTorch jit跟踪进行推理。
use_ipex(`bool`,*可选*,默认为`False `):
在PyTorch的Intel扩展可用时使用它。[IPEX
安装](https://github.com/intel/intel-extension-for-pytorch).
bf16 (`bool`, *可选*, 默认为 `False`):
是否使用 bf16 16 位(混合)精度训练来代替 32 位训练。需要 Ampere 或更新的
NVIDIA 架构,或者使用 CPU(use_cpu)或 Ascend NPU。这是一个实验性 API,可能会改变。
fp16(`bool `,*可选*,默认为`False `):
是否使用fp16 16位(混合)精度训练代替32位训练。
fp16_opt_level(`str`,*可选*,默认为'O1'):
对于“fp16”训练,在[0”、“O1”、“O2”和“O3”]中选择Apex AMP优化级别。查看详细信息
[Apex文档](https://nvidia.github.io/apex/amp).
fp16_backend(`str`,*可选*,默认为`auto`):
此论点已被弃用。请改用“half_precision_backend”。
half_precision_backend(`str`,*可选*,默认为`auto`):
用于混合精度训练的后端。必须是“auto”、“apex”、“cpu_amp”之一`“自动”将
根据检测到的PyTorch版本,使用CPU/CUDA AMP或APEX,而其他选项将强制
请求后端。
bf16_full_eval(`bool`,*可选*,默认为`False `):
是否使用完整的bfloat16求值而不是32位。这将更快并节省内存,但可能会造成伤害
度量值。这是一个实验性的API,它可能会改变。
fp16_full_eval(`bool`,*可选*,默认为`False’):
是否使用完整的float16求值而不是32位。这将更快并节省内存,但可能会造成损害
度量值。
tf32(`bool`,*可选*):
是否启用Ampere和较新GPU架构中可用的TF32模式。默认值取决于
PyTorch的版本默认值为“torch.backends.cuda.matmul.allow_tf32”。有关更多详细信息,请参阅
[TF32](https://huggingface.co/docs/transformers/performance#tf32)文件。这是一个
实验API,并且它可能改变。
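下面是一个示意性的混合精度配置(能否启用取决于你的硬件,仅供参考):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",
    bf16=True,   # 需要 Ampere 及以上的 NVIDIA GPU 或 Ascend NPU;否则可改用 fp16=True
    tf32=True,   # 允许矩阵乘法使用 TF32
)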
local_rank(`int`,*可选*,默认为-1):
分布式训练期间进程的排名。
ddp_backend (`str`, *可选*):
用于分布式培训的后端。必须是“nccl”、“mpi”、“ccl”、“gloo”、“hccl”之一。
tpu_num_cores('int'*可选*):
在TPU上训练时,TPU核心的数量(由启动器脚本自动传递)。
dataloader_drop_last(`bool `,*可选*,默认为`False `):
是否删除最后一个不完整的批(如果数据集的长度不能被批大小整除)
或者不。
eval_steps (`int` 或 `float`, *可选*):
如果`eval_strategy=“steps”`,则两次评估之间的更新步骤数。将默认为相同
如果未设置,则将值设置为“logging_step”。应为“[0,1)”范围内的整数或浮点数。如果小于1,
将被解释为总训练步骤的比率。
dataloader_num_workers(`int`,*可选*,默认为0):
用于数据加载的子进程数(仅限PyTorch)。0表示数据将加载到
主要过程。
past_index(`int`,*可选*,默认为-1):
一些模型,如[TransformerXL]../model_doc/transformersxl)或[XLNet]..model_doc/XLNet),可以使用
过去隐藏的状态为他们的预测。如果此参数设置为正整数,则“Trainer”将
使用相应的输出(通常是索引2)作为过去的状态,并在下一次将其提供给模型
在关键字参数“mems”下的训练步骤。
run_name (`str`, *可选*, 默认为 `output_dir`):
本次运行的描述名。通常用于 [wandb](https://www.wandb.com/)、
[mlflow](https://www.mlflow.org/) 和 [comet](https://www.comet.com/site) 的日志记录。如果不指定,将
与 `output_dir` 相同。
disable_tqdm(`bool`,*可选*):
是否禁用tqdm进度条和指标表
Jupyter笔记本中的[`~notebook.NotebookTrainingTracker `]。如果日志记录级别为
设置为警告或更低(默认),否则为“False”。
remove_unused_columns(`bool`,*可选*,默认为`True`):
是否自动删除模型转发方法未使用的列。
label_names(`List[str]`,*可选*):
与标签对应的输入字典中的键列表。

最终将默认为模型接受的包含单词“标签”的参数名称列表,
除非使用的模型是“XxxForQuestionAnswering”模型之一,在这种情况下,它还将包括
`[“开始位置”、“结束位置”]键。
load_best_model_at_end(`bool `,*可选*,默认为`False `):
是否在训练结束时加载训练过程中发现的最佳模型。当此选项为
启用后,将始终保存最佳检查点。请参阅
[`save_total_limit`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.save_total_limit)
更多。

<提示>

当设置为 `True` 时,参数 `save_strategy` 需要与 `eval_strategy` 保持一致;并且当取值为 "steps" 时,`save_steps` 必须是 `eval_steps` 的整数倍。

</提示>
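下面是一个示意性的“训练结束时加载最佳模型”配置(指标名 accuracy 只是假设,需要与你的 compute_metrics 返回的键一致):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",
    eval_strategy="steps",
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,                    # 必须是 eval_steps 的整数倍
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",  # 假设 compute_metrics 返回 {"accuracy": ...}
    greater_is_better=True,
)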

metric_for_best_model(`str`,*可选*):
与`load_best_model_at_end`结合使用,指定用于比较两个不同对象的度量
模型。必须是评估返回的某个指标名,可以带或不带前缀“eval_”。
如果未指定且 `load_best_model_at_end=True`,将默认为“loss”(即使用评估损失)。

如果你设置了这个值,`greater_is_better`将默认为`True`。如果发生以下情况,不要忘记将其设置为“False”
你的指标越低越好。
greater_is_better('bool'*可选*):
与“load_best_model_at_end”和“metric_for_best_mode”结合使用,以指定是否有更好的模型
是否应该有更大的度量。将默认为:

-如果`metric_for_best_model`设置为不以`loss'结尾的值,则为`True `。
-如果未设置“metric_for_best_model”,或设置为以“loss”结尾的值,则为“False”。

ignore_data_skip(`bool `,*可选*,默认为`False `):
恢复训练时,是否跳过epoch和batch以同时加载数据
与之前的训练一样。如果设置为“True”,训练将更快地开始(就像跳过步骤一样
可能需要很长时间),但不会产生与中断训练相同的结果。
fsdp(`bool `、`str `或[`~trainer_utils.FSDPOption`]列表,*optional*,默认为`''):
使用PyTorch分布式并行训练(仅适用于分布式训练)。

选项列表如下:

-“full_shard”:分片参数、梯度和优化器状态。
-“shard_grad_op”:分片优化器状态和梯度。
-“hybrid_shard”:在节点内应用“FULL_shard”,并在节点间复制参数。
-`“hybrid_shard_zero2”`:在节点内应用`shard_GRAD_OP`,并在节点间复制参数。
-“卸载”:将参数和梯度卸载到CPU(仅与“full_shard”和
`“shard_grad_op”)。
-“auto_wrap”:使用“default_auto_wrap_policy”自动递归地用FSDP包裹层。
fsdp_config(`str`或`dict`,*可选*):
与fsdp(Pytorch分布式并行训练)一起使用的配置。该值要么是位置
fsdp-json配置文件(例如`fsdp_config.json`)或已加载的json文件作为`dict`。

配置及其选项列表:
-min_num_params(`int`,*可选*,默认为`0`):
FSDP默认自动包裹的最小参数数。(仅当“fsdp”字段为
通过)。
-transformer_layer_cls_to_wrap(`List[str]`,*可选*):
要包装的变压器层类名列表(区分大小写),例如“BertLayer”、“GPTJBlock”,
`T5块`。。。。(仅在传递“fsdp”标志时有用)。
-backward_prefetch (`str`, *可选*)
FSDP的反向预取模式。控制何时预取下一组参数(仅在以下情况下有用
`传递fsdp字段)。

选项列表如下:

-“backward_pre”:在计算当前这组参数的梯度之前,预取下一组参数。
-“backward_post”:在计算完当前这组参数的梯度之后,预取下一组参数。
-forward_prefetch (`bool`, *可选*, 默认为 `False`)
FSDP 的前向预取模式(仅在传递 `fsdp` 字段时有用)。
如果为 `True`,FSDP 会在前向传播执行过程中,显式地预取下一个即将进行的 all-gather。
-limit_all_gathers(`bool`,*可选*,默认为`False `)
FSDP的limit_all_gathers(仅在传递“FSDP”字段时有用)。
如果为“True”,FSDP会显式同步CPU线程,以防止太多正在运行的线程
所有聚集。
-use_orig_params('bool'*可选*,默认为'True')
如果“True”,则允许在初始化过程中使用非统一的“requires_grad”,这意味着支持散布
冷冻和可训练的参数。在参数高效微调等情况下很有用。拜托
参考这个
[博客](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019
-sync_module_states(`bool`,*可选*,默认为`True`)
如果为“True”,则每个单独包装的FSDP单元将从秩0广播模块参数到
确保初始化后所有列中的值都相同
-cpu_ram_efficient_loading(`bool `,*可选*,默认为`False `)
如果“True”,则只有第一个进程加载预训练的模型检查点,而所有其他进程都加载
有空的重量。当此设置为“True”时,“sync_module_states”也必须为“True“,
否则,除主进程外的所有进程都将具有随机权重,从而导致意外
训练期间的行为。
-activation_checkpointing(`bool `,*可选*,默认为`False `):
如果“True”,激活检查点是一种通过清除以下激活来减少内存使用的技术
在反向传球过程中重新计算某些图层。实际上,这会产生额外的交易
减少内存使用的计算时间。
-xla(`bool `,*可选*,默认为`False `):
是否使用PyTorch/XLA全共享数据并行训练。这是一个实验性功能
其API可能在未来发展。
-xla_fsdp_settings(“字典”,*可选*)
该值是存储XLA FSDP包装参数的字典。

有关选项的完整列表,请参阅[此处](
https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_fully_sharded_data_parallel.py).
-xla_fsdp_grad_ckpt('bool'*可选*,默认为'False'):
将在每个嵌套的XLA FSDP包裹层上使用梯度检查点。此设置只能是
当xla标志设置为true,并通过指定自动包装策略时使用
fsdp_min_num_params或fsdp_transformer_layer_cls_to_wrap。
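下面是一个示意性的 fsdp 配置写法(能否生效取决于你的分布式启动方式,这里只演示参数的传法;BertLayer 只是假设的层名):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",
    fsdp="full_shard auto_wrap",  # 也可以传列表,如 ["full_shard", "auto_wrap"]
    fsdp_config={
        "transformer_layer_cls_to_wrap": ["BertLayer"],  # 假设模型由 BertLayer 堆叠而成
        "backward_prefetch": "backward_pre",
    },
)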

deepspeed (`str` 或 `dict`, *可选*):
使用 [DeepSpeed](https://github.com/microsoft/deepspeed)。这是一个实验性功能,其 API 将来可能会变化。该值可以是 DeepSpeed json 配置文件的路径(例如 `ds_config.json`),也可以是已经加载好的 json 文件(`dict` 形式)。

<提示警告={true}>
如果启用了任何 ZeRO-init,请确保模型是在初始化 `TrainingArguments` 之后才初始化的,否则该设置不会生效。
</提示>
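示意性的 DeepSpeed 用法(配置文件路径 ds_config.json 只是举例,需要事先存在并安装好 deepspeed):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",
    deepspeed="ds_config.json",  # 也可以直接传入一个配置 dict
)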

accelerator_config(`str`、`dict`或`AcceleratorConfig `,*可选*):
与内部“Accelerator”实现一起使用的配置。该值要么是位置
加速器json配置文件(例如`accelerator_config.json`),已加载的json文件为`dict`,
或者[`~trainer_pt_utils.AcceleratorConfig`]的实例。

配置及其选项列表:
-split_batches(`bool`,*可选*,默认为`False `):
加速器是否应将数据加载器生成的批分割到设备上。如果
`True `实际使用的批大小在任何类型的分布式进程上都是相同的,但它必须是
四舍五入您正在使用的`num_processes`的倍数。如果为“False”,则实际使用的批大小将是设置的批大小
在脚本中乘以进程数。
-dispatch_batches(`bool`,*可选*):
如果设置为“True”,则加速器准备的数据加载器仅在主进程上迭代
然后将批分割并广播给每个进程。对于“DataLoader”,将默认为“True”,其
底层数据集是“IterableDataset”,否则为“False”。
-even_batches (`bool`, *可选*, 默认为 `True`):
如果设置为“True”,则在所有流程的总批处理大小不能完全除以
数据集,数据集开头的样本将被复制,这样批次就可以平均分配给
所有工人。
-use_seedable_sampler('bool'*可选*,默认为'True'):
是否使用完全可种子随机采样器([`accelerate.data_loader.SeedableRandomSampler`])。确保
使用不同的采样技术,训练结果可以完全再现。虽然种子到种子的结果
平均而言,当使用多个不同的种子进行比较时,差异是可以忽略不计的。应该
为获得最佳效果,还应配合 [`~utils.set_seed`] 一起运行。
-use_configured_state(`bool`,*可选*,默认为`False `):
是否使用在调用“TrainingArguments”之前定义的预配置的“AcceleratorState”或“PartialState”。
如果为True,则必须初始化Accelerator或PartialState。请注意,这样做可能会导致问题
通过超参数调整。
label_smoothing_factor (`float`, *可选*, 默认为 0.0):
要使用的标签平滑因子。为零表示不做标签平滑;否则,底层 one-hot 编码的标签会从 0 和 1
分别变为 `label_smoothing_factor/num_labels` 和
`1 - label_smoothing_factor + label_smoothing_factor/num_labels`。
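举一个简单的数值例子(假设 num_labels=2、label_smoothing_factor=0.1):

label_smoothing_factor = 0.1
num_labels = 2
# one-hot 标签 [0, 1] 平滑后分别变为:
negative = label_smoothing_factor / num_labels                                # 0.05
positive = 1 - label_smoothing_factor + label_smoothing_factor / num_labels   # 0.95
print(negative, positive)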
debug(`str`或[`~debug_utils.DebugOption`]列表,*可选*,默认为`“”`):
启用一个或多个调试功能。这是一个实验性功能。

可能的选项有:

-“underflow_overflow”:检测模型输入/输出中的溢出,并报告导致溢出的最后一帧
事件
-“tpu_metrics_debug”:在tpu上打印调试指标

选项之间应该用空格隔开。
optim (`str` 或 [`training_args.OptimizerNames`], *可选*, 默认为 `"adamw_torch"`):
要使用的优化器:adamw_hf、adamw_torch、adamw_torch_fused、adamw_apex_fused、adamw_anyprecision 或
adafactor。
optim_args (`str`, *可选*):
提供给 AnyPrecisionAdamW 的可选参数。
group_by_length(`bool `,*可选*,默认为`False `):
是否将训练数据集中长度大致相同的样本组合在一起(以最小化
使用衬垫,提高效率)。仅在应用动态填充时有用。
length_column_name(`str`,*可选*,默认为`“length”`):
预先计算长度的列名。如果该列存在,则按长度分组将使用这些值
而不是在火车启动时计算它们。忽略,除非`group_by_length`为`True`,并且数据集是
`Dataset`的实例。
report_to (`str` 或 `List[str]`, *可选*, 默认为 `"all"`):
要把结果和日志上报到的集成列表。支持的平台有 "azure_ml"、
"clearml"、"codecarbon"、"comet_ml"、"dagshub"、"dvclive"、"flyte"、"mlflow"、"neptune"、
"tensorboard" 和 "wandb"。使用 "all" 表示上报到所有已安装的集成,"none" 表示不
上报到任何集成。
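示意性的日志上报配置(以 TensorBoard 为例,run_name 只是假设的名字):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",
    report_to=["tensorboard"],     # 也可以是 "wandb"、"mlflow" 等
    logging_dir="./outputs/runs",  # TensorBoard 日志目录
    logging_steps=100,
    run_name="my-experiment",
)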
ddp_find_unused_parameters('bool'*可选*):
当使用分布式训练时,传递给
`DistributedDataParallel` 的标志 `find_unused_parameters` 的值。如果使用了梯度检查点,将默认为 `False`,否则为 `True`。
ddp_bucket_cap_mb (`int`, *可选*):
使用分布式训练时,传给 `DistributedDataParallel` 的标志 `bucket_cap_mb` 的值。
ddp_broadcast_buffers('bool'*可选*):
使用分布式训练时,传给 `DistributedDataParallel` 的标志 `broadcast_buffers` 的值。
如果使用了梯度检查点,将默认为 `False`,否则为 `True`。
dataloader_pin_memory (`bool`, *optional*, defaults to `True`):
    Whether you want to pin memory in data loaders or not. Will default to `True`.
dataloader_persistent_workers (`bool`, *optional*, defaults to `False`):
    If `True`, the data loader will not shut down the worker processes after a dataset has been consumed
    once. This allows the workers' Dataset instances to stay alive. Can potentially speed up training, but
    will increase RAM usage. Will default to `False`.
dataloader_prefetch_factor (`int`, *optional*):
    Number of batches loaded in advance by each worker.
    2 means there will be a total of 2 * num_workers batches prefetched across all workers.
skip_memory_metrics (`bool`, *optional*, defaults to `True`):
    Whether to skip adding memory profiler reports to metrics. This is skipped by default because it slows
    down the training and evaluation speed.
push_to_hub (`bool`, *optional*, defaults to `False`):
    Whether or not to push the model to the Hub every time the model is saved. If this is activated,
    `output_dir` will begin a git directory synced with the repo (determined by `hub_model_id`) and the
    content will be pushed each time a save is triggered (depending on your `save_strategy`). Calling
    [`~Trainer.save_model`] will also trigger a push.

    <Tip warning={true}>

    If `output_dir` exists, it needs to be a local clone of the repository to which the [`Trainer`] will
    push.

    </Tip>

resume_from_checkpoint (`str`, *optional*):
    The path to a folder with a valid checkpoint for your model. This argument is not directly used by
    [`Trainer`], it's intended to be used by your training/evaluation scripts instead. See the [example
    scripts](https://github.com/huggingface/transformers/tree/main/examples) for more details.
hub_model_id (`str`, *optional*):
    The name of the repository to keep in sync with the local *output_dir*. It can be a simple model ID,
    in which case the model will be pushed to your namespace. Otherwise it should be the whole repository
    name, for instance `"user_name/model"`, which allows you to push to an organization you are a member
    of with `"organization_name/model"`. Will default to `user_name/output_dir_name`, with
    *output_dir_name* being the name of `output_dir`.
hub_strategy (`str` or [`~trainer_utils.HubStrategy`], *optional*, defaults to `"every_save"`):
    Defines the scope of what is pushed to the Hub and when. Possible values are:

    - `"end"`: push the model, its configuration, the tokenizer (if passed along to the [`Trainer`]) and a
      draft of a model card when the [`~Trainer.save_model`] method is called.
    - `"every_save"`: push the model, its configuration, the tokenizer (if passed along to the [`Trainer`])
      and a draft of a model card each time there is a model save. The pushes are asynchronous so they do
      not block training, and in case the saves are very frequent, a new push is only attempted if the
      previous one is finished. A last push is made with the final model at the end of training.
    - `"checkpoint"`: like `"every_save"` but the latest checkpoint is also pushed in a subfolder named
      last-checkpoint, allowing you to resume training easily with
      `trainer.train(resume_from_checkpoint="last-checkpoint")`.
    - `"all_checkpoints"`: like `"checkpoint"` but all checkpoints are pushed as they appear in the output
      folder (so you will get one checkpoint folder per folder in your final repository)
hub_token (`str`, *optional*):
    The token to use to push the model to the Hub. Will default to the token in the cache folder obtained
    with `huggingface-cli login`.
hub_private_repo (`bool`, *optional*, defaults to `False`):
    If `True`, the Hub repo will be set to private.
hub_always_push (`bool`, *optional*, defaults to `False`):
    Unless this is `True`, the `Trainer` will skip pushing a checkpoint when the previous push is not finished.
gradient_checkpointing (`bool`, *optional*, defaults to `False`):
    If `True`, use gradient checkpointing to save memory at the expense of a slower backward pass.
gradient_checkpointing_kwargs (`dict`, *optional*, defaults to `None`):
    Keyword arguments to be passed to the `gradient_checkpointing_enable` method.
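
A sketch of enabling gradient checkpointing from the arguments; whether `use_reentrant` is accepted depends on your torch/transformers versions, so treat that key as an assumption:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,
    # Forwarded to model.gradient_checkpointing_enable(); valid keys depend on your versions.
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
```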
include_inputs_for_metrics (`bool`, *optional*, defaults to `False`):
    Whether or not the inputs will be passed to the `compute_metrics` function. This is intended for metrics
    that need inputs, predictions and references for scoring calculation in a Metric class.
eval_do_concat_batches (`bool`, *optional*, defaults to `True`):
    Whether to recursively concatenate inputs/losses/labels/predictions across batches. If `False`, they will
    instead be stored as lists, with each batch kept separate.
auto_find_batch_size (`bool`, *optional*, defaults to `False`):
    Whether to automatically find a batch size that will fit into memory through exponential decay, avoiding
    CUDA Out-of-Memory errors. Requires accelerate to be installed (`pip install accelerate`).
full_determinism (`bool`, *optional*, defaults to `False`):
    If `True`, [`enable_full_determinism`] is called instead of [`set_seed`] to ensure reproducible results in
    distributed training. Important: this will negatively impact performance, so only use it for debugging.
torchdynamo (`str`, *optional*):
    If set, the backend compiler for TorchDynamo. Possible choices are `"eager"`, `"aot_eager"`, `"inductor"`,
    `"nvfuser"`, `"aot_nvfuser"`, `"aot_cudagraphs"`, `"ofi"`, `"fx2trt"`, `"onnxrt"` and `"ipex"`.
ray_scope (`str`, *optional*, defaults to `"last"`):
    The scope to use when doing hyperparameter search with Ray. By default, `"last"` will be used. Ray will
    then use the last checkpoint of all trials, compare those, and select the best one. However, other options
    are also available. See the [Ray documentation](
    https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial) for
    more options.
ddp_timeout (`int`, *optional*, defaults to 1800):
    The timeout for `torch.distributed.init_process_group` calls, used to avoid GPU socket timeouts when
    performing slow operations in distributed runs. Please refer to the [PyTorch documentation]
    (https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) for more
    information.
use_mps_device (`bool`, *optional*, defaults to `False`):
    This argument is deprecated. The `mps` device will be used if it is available, similarly to the `cuda` device.
torch_compile (`bool`, *optional*, defaults to `False`):
    Whether or not to compile the model using PyTorch 2.0
    [`torch.compile`](https://pytorch.org/get-started/pytorch-2.0/).

    This will use the best defaults for the [`torch.compile`
    API](https://pytorch.org/docs/stable/generated/torch.compile.html?highlight=torch+compile#torch.compile).
    You can customize the defaults with the arguments `torch_compile_backend` and `torch_compile_mode`, but we
    don't guarantee any of them will work, as the support is progressively rolled in within PyTorch.

    This flag and the whole compile API are experimental and subject to change in future releases.
torch_compile_backend (`str`, *optional*):
    The backend to use in `torch.compile`. If set to any value, `torch_compile` will be set to `True`.

    Refer to the PyTorch docs for possible values, and note that they may change across PyTorch versions.

    This flag is experimental and subject to change in future releases.
torch_compile_mode (`str`, *optional*):
    The mode to use in `torch.compile`. If set to any value, `torch_compile` will be set to `True`.

    Refer to the PyTorch docs for possible values, and note that they may change across PyTorch versions.

    This flag is experimental and subject to change in future releases.
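
A minimal sketch, assuming PyTorch >= 2.0 is installed:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    torch_compile=True,                # compile the model with torch.compile defaults
    torch_compile_backend="inductor",  # optional; setting a backend also implies torch_compile=True
)
```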
split_batches (`bool`, *optional*):
    Whether or not the accelerator should split the batches yielded by the dataloaders across the devices
    during distributed training. If set to `True`, the actual batch size used will be the same on any kind of
    distributed process, but it must be a round multiple of the number of processes you are using (such as
    GPUs).
include_tokens_per_second (`bool`, *optional*):
    Whether or not to compute the number of tokens per second per device for training speed metrics.

    This will iterate over the entire training dataloader once beforehand, and will slow down the entire
    process.

include_num_input_tokens_seen (`bool`, *optional*):
    Whether or not to track the number of input tokens seen throughout training.

    May be slower in distributed training as gather operations must be called.

neftune_noise_alpha (`Optional[float]`):
    If not `None`, this will activate NEFTune noise embeddings. This can drastically improve model performance
    for instruction fine-tuning. Check out the [original paper](https://arxiv.org/abs/2310.05914) and the
    [original code](https://github.com/neelsjain/NEFTune). Supports transformers `PreTrainedModel` and also
    `PeftModel` from peft.
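
A sketch; the value `5.0` mirrors one used in the NEFTune paper and is not a universal recommendation:

```python
from transformers import TrainingArguments

# Enable NEFTune noise embeddings for instruction fine-tuning.
args = TrainingArguments(output_dir="out", neftune_noise_alpha=5.0)
```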
optim_target_modules (`Union[str, List[str]]`, *optional*):
    The target modules to optimize, i.e. the module names that you would like to train. Currently this is used
    only for the GaLore algorithm (https://arxiv.org/abs/2403.03507); see https://github.com/jiaweizzhao/GaLore
    for more details. You need to make sure to pass a valid GaLore optimizer, e.g. one of `"galore_adamw"`,
    `"galore_adamw_8bit"`, `"galore_adafactor"`, and make sure that the target modules are `nn.Linear` modules
    only.
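
A hedged sketch; GaLore additionally requires the `galore-torch` package, and the module-name patterns below are hypothetical and model-specific:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    optim="galore_adamw",                  # must be one of the GaLore optimizers
    optim_target_modules=["attn", "mlp"],  # hypothetical patterns; must resolve to nn.Linear layers
)
```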

batch_eval_metrics (`Optional[bool]`, defaults to `False`):
    If set to `True`, evaluation will call `compute_metrics` at the end of each batch to accumulate statistics
    rather than keeping all eval logits in memory. When set to `True`, you must pass a `compute_metrics`
    function that takes a boolean argument `compute_result`, which when passed `True` will trigger the final
    global summary statistics from the batch-level summary statistics you've accumulated over the evaluation set.
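
A minimal sketch of a stateful `compute_metrics` compatible with `batch_eval_metrics=True`, assuming predictions and labels arrive as NumPy arrays (convert tensors first if your setup yields them):

```python
import numpy as np

class AccuracyAccumulator:
    """Accumulates per-batch counts and only emits the metric on the final call."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def __call__(self, eval_pred, compute_result=False):
        preds = np.argmax(eval_pred.predictions, axis=-1)
        labels = eval_pred.label_ids
        self.correct += int((preds == labels).sum())
        self.total += int(labels.size)
        if compute_result:  # last batch: return the global summary and reset state
            accuracy = self.correct / max(self.total, 1)
            self.correct = self.total = 0
            return {"accuracy": accuracy}
        return {}

# Usage sketch:
# Trainer(..., compute_metrics=AccuracyAccumulator(),
#         args=TrainingArguments(..., batch_eval_metrics=True))
```
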

eval_on_start (`bool`, *optional*, defaults to `False`):
    Whether to perform an evaluation step (sanity check) before training, to ensure the validation step works
    correctly.

eval_use_gather_object (`bool`, *optional*, defaults to `False`):
    Whether to run recursive gathering of objects in a nested list/tuple/dictionary of objects from all devices.
    This should only be enabled if users are not just returning tensors, and it is actively discouraged by
    PyTorch.
