@lru_cache in functools
Today, while reading someone else's code, I noticed a decorator applied to a function that requests a URL:
from functools import lru_cache

@lru_cache()
def code_id_map_em() -> dict:
    pass

For reference, this is how lru_cache is implemented in CPython's functools module:
def lru_cache(maxsize=128, typed=False):
    """Least-recently-used cache decorator.

    If *maxsize* is set to None, the LRU features are disabled and the cache
    can grow without bound.

    If *typed* is True, arguments of different types will be cached separately.
    For example, f(3.0) and f(3) will be treated as distinct calls with
    distinct results.

    Arguments to the cached function must be hashable.

    View the cache statistics named tuple (hits, misses, maxsize, currsize)
    with f.cache_info().  Clear the cache and statistics with f.cache_clear().
    Access the underlying function with f.__wrapped__.

    See: https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)
    """
    # Users should only access the lru_cache through its public API:
    #   cache_info, cache_clear, and f.__wrapped__
    # The internals of the lru_cache are encapsulated for thread safety and
    # to allow the implementation to change (including a possible C version).

    if isinstance(maxsize, int):
        # Negative maxsize is treated as 0
        if maxsize < 0:
            maxsize = 0
    elif callable(maxsize) and isinstance(typed, bool):
        # The user_function was passed in directly via the maxsize argument
        user_function, maxsize = maxsize, 128
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo)
        wrapper.cache_parameters = lambda : {'maxsize': maxsize, 'typed': typed}
        return update_wrapper(wrapper, user_function)
    elif maxsize is not None:
        raise TypeError(
            'Expected first argument to be an integer, a callable, or None')

    def decorating_function(user_function):
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo)
        wrapper.cache_parameters = lambda : {'maxsize': maxsize, 'typed': typed}
        return update_wrapper(wrapper, user_function)

    return decorating_function
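Two details in this source are worth noticing. First, maxsize defaults to 128 and a negative value is treated as 0. Second, the callable(maxsize) branch is what lets you write @lru_cache without parentheses (Python 3.8+). A minimal sketch of both calling styles, with made-up function names (cache_parameters() requires Python 3.9+):

from functools import lru_cache

# Bare form: takes the callable(maxsize) branch above, so the decorated
# function is wrapped with the default maxsize of 128 (Python 3.8+).
@lru_cache
def square(x):
    return x * x

# Parameterized form: goes through decorating_function instead.
@lru_cache(maxsize=None, typed=True)
def cube(x):
    return x * x * x

print(square.cache_parameters())  # {'maxsize': 128, 'typed': False}
print(cube.cache_parameters())    # {'maxsize': None, 'typed': True}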
@lru_cache is a decorator provided by Python's functools module that implements a Least Recently Used (LRU) cache. It caches a function's return values: when the function is called again with the same arguments, the cached result is returned directly instead of re-running the function body.
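A quick sketch of that behavior, using a call counter purely for illustration:

from functools import lru_cache

calls = 0

@lru_cache()
def slow_double(x):
    global calls
    calls += 1  # counts how often the function body actually runs
    return x * 2

print(slow_double(21), slow_double(21), slow_double(21))  # 42 42 42
print(calls)                     # 1 -- the body ran only once
print(slow_double.cache_info())  # CacheInfo(hits=2, misses=1, maxsize=128, currsize=1)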
Applied to the snippet above: @lru_cache() decorates the code_id_map_em function. This means that when code_id_map_em is called again with the same arguments (here it takes none), the function does not repeat the network request; it returns the previously cached result instead.
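The body of code_id_map_em isn't shown in the snippet, so the following is only a hypothetical sketch of the pattern, with a made-up URL and response shape, to show where the saving comes from:

from functools import lru_cache

import requests  # third-party library, assumed to be installed


@lru_cache()
def code_id_map_em() -> dict:
    # Hypothetical body: fetch a code -> market-id mapping once per process.
    # The URL and JSON layout are invented for illustration only.
    resp = requests.get("https://example.com/code_id_map")
    resp.raise_for_status()
    return resp.json()


mapping = code_id_map_em()        # first call performs the HTTP request
mapping_again = code_id_map_em()  # served from the cache, no network traffic
print(mapping is mapping_again)   # True -- literally the same cached dict

One consequence of that last line: every caller receives the same dict object, so it should be treated as read-only; mutating it would silently change what all later calls see.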
The benefits of using @lru_cache include:
Better performance: it avoids repeated computation and repeated network requests, especially when a function is called frequently with a limited set of arguments (see the fibonacci sketch below).
Lower resource usage: it cuts down the number of calls to external services such as databases or APIs, reducing their load.
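The "fewer repeated computations" point is easiest to see on a naive recursive function; a standard illustration:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache this recursion recomputes the same subproblems
    # exponentially many times; with it, each n is computed exactly once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))          # 354224848179261915075, returned almost instantly
print(fib.cache_info())  # misses=101 -- one per distinct argument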
In this case, code_id_map_em fetches the mapping between stock codes and their market identifiers. That mapping rarely changes, so @lru_cache effectively eliminates the unnecessary repeat network requests.
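And if the mapping does change while the process is running, the docstring quoted above already names the escape hatch: cache_clear() drops the cached result so the next call fetches fresh data.

code_id_map_em.cache_clear()  # forget the cached mapping and its statistics
fresh = code_id_map_em()      # this call performs the network request again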