FutureWarning: pandas.io.json.json_normalize is deprecated, use pandas.json_normalize instead

While working with JSON data in pandas, the author hit a FutureWarning because the pandas.io.json module is deprecated. The fix is to replace `from pandas.io.json import json_normalize` with `from pandas import json_normalize`.

How do I fix the problem I'm running into with pandas.json in Python?

I am trying to run some code in Python.

The libraries I import are the following:

import requests
import json
from datetime import datetime
import pandas as pd
import re
from pandas.io.json import json_normalize
When I try to pull data from a website, I get the following warning:
FutureWarning: pandas.io.json.json_normalize is deprecated, use pandas.json_normalize instead

What am I doing wrong?

json_normalize is now available directly from the top-level pandas namespace, because pandas.io.json has been deprecated.

Replace
from pandas.io.json import json_normalize
with:
from pandas import json_normalize
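
For reference, here is a minimal sketch of the corrected usage. It assumes a small hand-written nested payload; the sample records and field names below are purely illustrative and are not from the original question.

from pandas import json_normalize   # top-level location since pandas 1.0

# Hypothetical nested records, e.g. what response.json() might return
records = [
    {"id": 1, "user": {"name": "alice", "city": "Paris"}},
    {"id": 2, "user": {"name": "bob", "city": "Berlin"}},
]

df = json_normalize(records)  # flattens "user" into user.name / user.city columns
print(df)

If the same script also has to run on pandas releases older than 1.0, where the top-level import does not exist yet, one common pattern is to try the new import first and fall back to the old path:

try:
    from pandas import json_normalize          # pandas >= 1.0
except ImportError:
    from pandas.io.json import json_normalize  # older pandas releases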
