Computing π using the Monte Carlo method

This article presents a Python implementation of the Monte Carlo method for estimating π. We generate random points in the unit square and check whether each one falls inside the unit circle. Since the points are uniform on [0, 1] × [0, 1], the fraction landing inside the quarter circle approaches its area, π/4, so multiplying that fraction by 4 estimates π. The program covers random point generation, the distance check against the circle's center, and the computation and printing of the final estimate.
```python
import math
import random

class Point:
    """A 2D point whose coordinates can be drawn uniformly at random."""

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def set_rand_range(self, rand_start, rand_end):
        self.rand_start = rand_start
        self.rand_end = rand_end

    def get_random(self):
        return random.uniform(self.rand_start, self.rand_end)

    def set_x(self):
        self.x = self.get_random()

    def get_x(self):
        return self.x

    def set_y(self):
        self.y = self.get_random()

    def get_y(self):
        return self.y

def get_randpoint():
    """Return a point drawn uniformly from the unit square [0, 1] x [0, 1]."""
    p = Point()
    p.set_rand_range(0, 1)
    p.set_x()
    p.set_y()
    return p.get_x(), p.get_y()

def compute_pi(n):
    """Estimate pi as 4 * (fraction of n random points inside the quarter circle)."""
    count = 0
    for _ in range(n):
        x, y = get_randpoint()
        if math.sqrt(x * x + y * y) <= 1:  # point lies inside (or on) the unit circle
            count += 1
    pi = 4.0 * count / n
    print(pi)

def main():
    # Larger sample sizes give estimates that converge toward pi.
    for n in [100, 1000, 10000, 100000, 1000000]:
        compute_pi(n)

if __name__ == '__main__':
    main()
    print(math.pi)  # reference value for comparison
```
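
The class-based version above works, but the same estimator can be written far more compactly with NumPy. The vectorized sketch below is not part of the original post; it assumes `numpy` is available and reproduces the same quarter-circle counting logic:

```python
import numpy as np

def compute_pi_vectorized(n, seed=None):
    """Vectorized Monte Carlo estimate of pi from n uniform points in [0, 1]^2."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    y = rng.random(n)
    inside = (x * x + y * y) <= 1.0  # boolean mask: point falls inside the quarter circle
    return 4.0 * inside.mean()

for n in [100, 10_000, 1_000_000]:
    print(n, compute_pi_vectorized(n))
```

As with any plain Monte Carlo estimator, the error shrinks roughly as 1/√n, so each additional decimal digit of accuracy costs about 100× more samples.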

### MCMC: Metropolis-Hastings Python Code Implementation

This section explains and demonstrates the Metropolis-Hastings algorithm, a Markov Chain Monte Carlo (MCMC) method used extensively in Bayesian statistics and computational physics.

#### Understanding the Algorithm

Metropolis-Hastings samples from probability distributions that cannot be sampled directly. From the current state x, a candidate x′ is drawn from a proposal distribution q and accepted with probability min(1, p(x′)q(x|x′) / (p(x)q(x′|x))), where p is the target density; with a symmetric proposal the q terms cancel. Repeating this step produces a Markov chain whose distribution converges toward the target over time[^4].

#### Practical Example Using the PyMC Library

Libraries such as `pymc` implement this kind of sampler behind a high-level probabilistic-programming interface, which is convenient for statistical models too complex for direct analytical treatment, such as hierarchical models common in machine learning applications[^1]:

```python
import pymc as pm
import numpy as np
import arviz as az

# True parameters used to generate synthetic data
mu_true = 5
sigma_true = 2

with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sigma=10)  # prior for the mean parameter
    y_observed = pm.Normal(
        'y_observed',
        mu=mu,
        sigma=sigma_true,
        observed=np.random.normal(loc=mu_true, scale=sigma_true, size=100),
    )  # likelihood over synthetic data generated around the true mean
    trace = pm.sample(1000, tune=500)

az.plot_posterior(trace)
```

This script defines a prior and a likelihood, then hands sampling to PyMC's built-in samplers. That division of labor is what makes probabilistic-programming packages attractive for hierarchical models with many interdependent variables, where hand-rolled inference code quickly becomes unwieldy.

For users who prefer a manual implementation with no dependencies beyond NumPy:

```python
import numpy as np

def metropolis_hastings(target_distribution, proposal_function, n_samples):
    samples = []
    x_current = 0  # initial state; may be set differently depending on context
    for _ in range(n_samples):
        x_proposed = proposal_function(x_current)
        # Acceptance ratio for a symmetric proposal (Metropolis rule)
        acceptance_ratio = min(1., target_distribution(x_proposed)
                                   / target_distribution(x_current))
        if np.random.rand() < acceptance_ratio:
            x_current = x_proposed
        samples.append(x_current)  # on rejection, the current state is repeated
    return np.array(samples)

def normal_pdf(x, mu=0, sigma=1):
    """A simple Gaussian PDF."""
    coefficient = 1 / (sigma * np.sqrt(2 * np.pi))
    exponent = -((x - mu) / sigma) ** 2 / 2.
    return coefficient * np.exp(exponent)

proposals = lambda x: x + np.random.randn()  # symmetric random-walk proposal
samples = metropolis_hastings(normal_pdf, proposals, 10_000)
print(f"Sampled values:\n{samples}")
```

In this standalone version, both the target density (`normal_pdf`) and the transition kernel (`proposals`) must be defined explicitly for each use case; nothing is assumed beyond what those two definitions encode. Note that the simplified acceptance ratio above is only valid for symmetric proposals. The code is illustrative rather than production-grade: a real deployment would add burn-in handling, convergence diagnostics, and rigorous testing.

--related questions--

1. How does the choice of proposal distribution affect the efficiency of the Metropolis-Hastings sampler?
2. What are some common challenges faced when applying MCMC techniques to large-scale problems?
3. Can you explain why burn-in periods are important in MCMC simulations?
4. In what ways do adaptive MCMC algorithms improve upon basic Metropolis-Hastings schemes?
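
As a quick follow-up (not in the original text), one way to sanity-check the hand-rolled sampler is to compare the empirical moments of the chain, after discarding a burn-in prefix, against those of the standard normal target. The snippet below assumes the `samples` array produced above:

```python
burn_in = 1_000            # discard early samples taken before the chain mixes
kept = samples[burn_in:]
print(f"mean (target 0): {kept.mean():.3f}")
print(f"std  (target 1): {kept.std():.3f}")
```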