Python logging: multi-module usage

This article explains how to use Python's logging module across multiple modules: how to define and configure a logger in one module, and how to share that same logger and its configuration between modules. A worked example configures a logger in the main module and logs through its child loggers in another module.


Within a single Python interpreter process, every call to logging.getLogger('someLogger') returns the same object. This rule holds not only inside one module, but also across all modules loaded in that interpreter process.
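
A quick way to verify this (the logger name 'someLogger' is only an illustration):

import logging

a = logging.getLogger("someLogger")
b = logging.getLogger("someLogger")
print(a is b)  # True: one logger object per name, per interpreter process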

Moreover, application code can define and configure a parent logger in one module and obtain a child of that logger in another module; every record emitted through the child logger is passed up to the parent logger's handlers.
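
A minimal sketch of that propagation, assuming the hypothetical names 'sp_app' and 'sp_app.child':

import logging

# configure only the parent
parent = logging.getLogger("sp_app")
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler())

# the child has no handlers or level of its own
child = logging.getLogger("sp_app.child")
child.info("handled by the parent's StreamHandler via propagation")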


Here is an example. This is the code of the main module, python_log_multimodule1.py:

#coding=utf-8
import logging

import python_log_multimodule2
from python_log_multimodule2 import SubModule

# create logger named "sp_app"
logger = logging.getLogger("sp_app")
logger.setLevel(logging.DEBUG)

# create file handler which logs even debug messages
fh = logging.FileHandler("spam.log")
fh.setLevel(logging.DEBUG)

# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)

# create formatter and add it to the handlers
formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(thread)d - %(message)s")
fh.setFormatter(formatter)
ch.setFormatter(formatter)

# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
logger.info("creating an instance of subModule.SubModule")
a = SubModule()

logger.info("created an instance of subModule.SubModule")
logger.info("calling subModule.SubModule.do_something")
a.do_something()

logger.info("finished subModule.SubModule.do_something")
logger.info("calling subModule.some_function()")
python_log_multimodule2.some_function()

logger.info("done with subModule.some_function()")

And this is the code of the sub-module, python_log_multimodule2.py:

#coding=utf-8

import time
import logging

# create logger
module_logger = logging.getLogger("sp_app.subModule")


class SubModule:
    def __init__(self):
        self.logger = logging.getLogger("sp_app.subModule.SubModule")
        self.logger.info("creating an instance of subModule")

    def do_something(self):
        self.logger.info("start doing-something.")
        a = 1 + 1
        self.logger.info("end doing-something.")

    def doWork(self):
        self.logger.info("start working.")
        time.sleep(0.1)
        self.logger.info("end working.")


def some_function():
    module_logger.info("received a call to \"some_function\" ")

As you can see, we defined a logger named 'sp_app' in the main module and configured it. From then on, calling getLogger('sp_app') anywhere in the same interpreter process returns that same object; there is no need to define or configure it again, and it can be used directly.

Even more conveniently, any child logger you define for it shares the parent logger's definition and configuration. The parent-child relationship is established purely by naming: any logger whose name starts with 'sp_app.' is a child of 'sp_app', for example 'sp_app.subModule'.
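
This name-based parenthood can be checked directly (a small sketch reusing the names above):

import logging

app = logging.getLogger("sp_app")
app.setLevel(logging.DEBUG)

sub = logging.getLogger("sp_app.subModule")
print(sub.parent is app)                         # True: parenthood comes from the dotted name
print(sub.getEffectiveLevel() == logging.DEBUG)  # True: level is inherited from 'sp_app'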

This is very handy in real projects. For an application, first write the application's logging policy in a logging configuration file; it can define just a single top-level logger, say 'AppName'. Then, in the main function, load that configuration with fileConfig. After that, anywhere in the application, across different modules, you can log through child loggers of AppName, such as AppName.Util or AppName.Core, without defining and configuring each logger over and over.
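
A minimal sketch of that setup; the file name logging.conf and the logger name 'AppName' are hypothetical:

# logging.conf
[loggers]
keys=root,AppName

[handlers]
keys=console

[formatters]
keys=default

[logger_root]
level=WARNING
handlers=console

[logger_AppName]
level=DEBUG
handlers=console
qualname=AppName
propagate=0

[handler_console]
class=StreamHandler
level=DEBUG
formatter=default
args=(sys.stdout,)

[formatter_default]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s

Then in the main function:

# main.py
import logging
import logging.config

logging.config.fileConfig("logging.conf")

# anywhere else in the application, child loggers need no further setup:
logging.getLogger("AppName.Util").info("no extra configuration required")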

Inspecting the log records (these come from spam.log; since the console handler ch is set to ERROR, the INFO records do not appear on the console):

2018-02-27 14:10:49,896 - sp_app - INFO - 140735520973632 - creating an instance of subModule.SubModule
2018-02-27 14:10:49,896 - sp_app.subModule.SubModule - INFO - 140735520973632 - creating an instance of subModule
2018-02-27 14:10:49,897 - sp_app - INFO - 140735520973632 - created an instance of subModule.SubModule
2018-02-27 14:10:49,897 - sp_app - INFO - 140735520973632 - calling subModule.SubModule.do_something
2018-02-27 14:10:49,897 - sp_app.subModule.SubModule - INFO - 140735520973632 - start doing-something.
2018-02-27 14:10:49,897 - sp_app.subModule.SubModule - INFO - 140735520973632 - end doing-something.
2018-02-27 14:10:49,897 - sp_app - INFO - 140735520973632 - finished subModule.SubModule.do_something
2018-02-27 14:10:49,897 - sp_app - INFO - 140735520973632 - calling subModule.some_function()
2018-02-27 14:10:49,897 - sp_app.subModule - INFO - 140735520973632 - received a call to "some_function" 
2018-02-27 14:10:49,897 - sp_app - INFO - 140735520973632 - done with subModule.some_function()