
A detailed guide to the inspect module
This article covers the `getattr_static()` function from Python's inspect module. The function retrieves an object's attributes without triggering dynamic lookup, i.e. without going through the descriptor protocol, `__getattr__()` or `__getattribute__()`. It also discusses the function's limitations and shows example code for handling the builtin descriptor types.
inspect.getattr_static(obj, attr, default=None)

Retrieve attributes without triggering dynamic lookup via the descriptor protocol, __getattr__() or __getattribute__().

Note: this function may not be able to retrieve all attributes that getattr() can fetch (like dynamically created attributes) and may find attributes that getattr() can’t (like descriptors that raise AttributeError). It can also return descriptor objects instead of instance members.
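As a concrete illustration of that note (a sketch, not part of the original docs), the `Lazy` class below is a made-up example: its property raises AttributeError, so plain getattr() fails while getattr_static() still finds the descriptor itself:

```python
import inspect

class Lazy:
    @property
    def value(self):
        # a descriptor may raise AttributeError to signal "no value"
        raise AttributeError("not computed yet")

obj = Lazy()

# getattr() invokes the property, so the AttributeError propagates:
try:
    getattr(obj, 'value')
    print("getattr succeeded")
except AttributeError:
    print("getattr raised AttributeError")

# getattr_static() skips the descriptor protocol and returns the
# property object itself instead:
static = inspect.getattr_static(obj, 'value')
print(type(static))  # <class 'property'>
```

Note that the returned object is the property, not the attribute value, which is exactly the "can also return descriptor objects" caveat above.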

If the instance __dict__ is shadowed by another member (for example a property) then this function will be unable to find instance members.

New in version 3.2.
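The shadowed-`__dict__` limitation can be sketched as follows (a hypothetical example, assuming CPython's behaviour of storing instance attributes in the real, hidden instance dict even when `__dict__` is shadowed):

```python
import inspect

class Shadowed:
    # __dict__ itself is shadowed by a property, so getattr_static()
    # cannot reach the real instance dictionary
    @property
    def __dict__(self):
        return {}

obj = Shadowed()
obj.x = 1  # still stored in the real (hidden) instance dict

print(getattr(obj, 'x'))                            # 1
print(inspect.getattr_static(obj, 'x', 'missing'))  # 'missing'
```

Normal lookup still finds `x` through the instance's real dict slot, but getattr_static() only sees the shadowing property's empty dict and falls back to the default.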

getattr_static() does not resolve descriptors, for example slot descriptors or getset descriptors on objects implemented in C. The descriptor object is returned instead of the underlying attribute.

You can handle these with code like the following. Note that for arbitrary getset descriptors invoking these may trigger code execution:

# example code for resolving the builtin descriptor types
from inspect import getattr_static

class _foo:
    __slots__ = ['foo']

slot_descriptor = type(_foo.foo)
getset_descriptor = type(type(open(__file__)).name)
wrapper_descriptor = type(str.__dict__['__add__'])
descriptor_types = (slot_descriptor, getset_descriptor, wrapper_descriptor)

# some_object is a placeholder for the object being inspected
result = getattr_static(some_object, 'foo')
if type(result) in descriptor_types:
    try:
        result = result.__get__()
    except AttributeError:
        # descriptors can raise AttributeError to
        # indicate there is no underlying value
        # in which case the descriptor itself will
        # have to do
        pass
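To see the slot-descriptor case in action, here is a minimal, self-contained sketch (the `Point` class is a made-up example): getattr_static() returns the member descriptor of a `__slots__` attribute rather than its value, and calling the descriptor's `__get__` against the instance resolves it:

```python
import inspect

class Point:
    __slots__ = ['x']

p = Point()
p.x = 3

# getattr_static() returns the slot (member) descriptor, not the value:
result = inspect.getattr_static(p, 'x')
print(type(result).__name__)     # member_descriptor

# invoking the descriptor against the instance resolves the value:
print(result.__get__(p, Point))  # 3
```

This is the behaviour the resolution recipe above is working around: code that expects the underlying value must explicitly invoke the descriptor.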