[Python for Data Analysis] Python Basic--Function

This post covers the basics of Python functions: argument types, scope, closures, lambda functions, and advanced features including multiple return values, *args and **kwargs, and partial argument application. It also touches on generators, the itertools module, and the basics of interacting with files and the operating system.


Arguments

An argument can be a scalar, a data structure such as a list, or any other object, including a function.

Positional and keyword arguments

In [3]:

def my_function(x, y, z=1.5):
    if z > 1:
        return z * (x + y)
    else:
        return z / (x + y)
my_function(5, 6, z=0.7), my_function(3.14, 7, 3.5)
Out[3]:
(0.06363636363636363, 35.49)

Namespace, Scope, Local Functions

Nothing surprising here; just pay attention to the scope in which a variable exists, and note that functions can also be defined inside other functions.
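A minimal sketch of both points (the names here are illustrative, not from the book):

```python
def outer():
    message = 'hello'  # local variable: exists only inside outer()

    def inner():  # a function defined inside another function
        return message.upper()  # inner() can read outer()'s locals

    return inner()

print(outer())  # HELLO
# print(message)  # would raise NameError: message is local to outer()
```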
returning Multiple Values

In [4]:

def f():
    a = 5
    b = 6
    c = 7
    return a, b, c  # returns a tuple
a, b, c = f()
print(a, b, c)
5 6 7

Functions are objects

We can build a collection of functions by referring to them by name, and pass them around as arguments; `b = foo` simply binds the function object `foo` to another name.

In [6]:

def addOne(x):
    return x + 1
def multiTwo(x):
    return x * 2
def minusTwo(x):
    return x - 2
def divTwo(x):
    return x / 2  # true division in Python 3
def clear(x, ops):
    for function in ops:
        x = function(x)
    return x
clear(5, [addOne, multiTwo, minusTwo, divTwo])
Out[6]:
5.0

lambda functions

Keyword: lambda

In [9]:

# sort strings by the number of unique characters they contain
strings = ['abc', 'aaab', 'aaaaa']
strings.sort(key=lambda x: len(set(x)))
strings
Out[9]:
['aaaaa', 'aaab', 'abc']

Closures: Functions that Return Functions

Not needed for now.
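Still, for reference, a minimal closure sketch (the counter example is illustrative, not from the book):

```python
def make_counter():
    count = [0]  # mutable container so the inner function can update it

    def counter():
        count[0] += 1
        return count[0]

    return counter  # the returned function "closes over" count

c = make_counter()
print(c(), c(), c())  # each call remembers the previous state
```

Each counter returned by `make_counter()` keeps its own private state.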

Extended Call Syntax with *args, **kwargs

Not needed for now.
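For reference, a minimal sketch of what the two forms collect (the function name is hypothetical):

```python
def show_args(*args, **kwargs):
    # *args collects extra positional arguments into a tuple,
    # **kwargs collects extra keyword arguments into a dict
    return args, kwargs

print(show_args(1, 2, x=3))  # ((1, 2), {'x': 3})
```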

Currying: Partial Argument Application

Partial application simplifies a given function by fixing some of its arguments.

# may come up when working with pandas
# ma60 = lambda x: pandas.rolling_mean(x, 60)
# data.apply(ma60)
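The commented pandas snippet fixes one argument of a two-argument function; `functools.partial` expresses the same idea directly. The `rolling_mean` helper below is a hypothetical stand-in for illustration, not the pandas function:

```python
from functools import partial

def rolling_mean(values, window):
    # simple rolling mean over a list, for illustration only
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

ma3 = partial(rolling_mean, window=3)  # fix window=3 in advance
print(ma3([1, 2, 3, 4, 5]))  # [2.0, 3.0, 4.0]
```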

Generators
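A minimal generator sketch (illustrative): a function containing `yield` produces values lazily, one at a time, instead of building a whole list up front.

```python
def squares(n=5):
    for i in range(1, n + 1):
        yield i ** 2  # execution pauses here until the next value is requested

gen = squares()
print(list(gen))  # [1, 4, 9, 16, 25]
```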

Itertools module
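One itertools function worth knowing is `groupby`, which groups consecutive elements of a sequence by a key function:

```python
import itertools

first_letter = lambda x: x[0]
names = ['Alan', 'Adam', 'Wes', 'Will', 'Albert']

# groupby only merges *consecutive* elements with the same key,
# which is why 'Albert' ends up in its own 'A' group
for letter, group in itertools.groupby(names, first_letter):
    print(letter, list(group))  # group is a generator
```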

Files and the Operating System

In most cases, use pandas.read_csv() to read data files into a Python data structure.
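Underneath read_csv, plain `open()` does the work. A minimal sketch using a hypothetical file name:

```python
import os

path = 'example.txt'  # hypothetical file name

# write a small comma-separated file
with open(path, 'w') as f:
    f.write('a,b,c\n1,2,3\n')

# read it back line by line; the with-block closes the file automatically
with open(path) as f:
    lines = [line.rstrip('\n') for line in f]

print(lines)  # ['a,b,c', '1,2,3']
os.remove(path)  # clean up
```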
