Summative project
Computational Finance with Python
Table of contents

General information
  Marking and credit
  Academic integrity
  Submission instructions
1 Black-Scholes model
  1.1 Historical volatility
  1.2 Binomial tree approximation
2 Local volatility model
  2.1 Option pricing
    2.1.1 Monte Carlo method
    2.1.2 Finite difference methods
  2.2 Theta
References
General information
Each student has a unique individualised assignment. Download your assignment directly from the submission point on Moodle.
Marking and credit
This project contributes 90% towards the final mark for the module.
The project will be marked anonymously. Do not include your name, student number or
any identifying details in your work.
In each exercise, marks are allocated for both coding (style, clarity and correctness) and
accuracy of numerical results, as well as explanations and written work (including comments and docstrings in the code). The marks indicated are indicative only.
Partial credit will be given for partial completion of an exercise, for example, if you are
able to do a Monte Carlo estimate with fewer paths or time steps.
It may prove impossible to examine projects containing code that does not run, or does
not allow data to be changed. In such cases a mark of 0 will be recorded. It is therefore
essential that all files are tested thoroughly in Colab before being submitted in Moodle.
Academic integrity
You may use and adapt any code submitted by yourself as part of this module, including
your own solutions to the exercises. You may use and adapt all Python code provided
in Moodle or on GitHub as part of this module, without the need for acknowledgement.
Any code not written by yourself must be acknowledged and referenced.
This is an individual project. You must not collaborate with anybody else or use code
that has not been provided in Moodle without acknowledgement. This includes, for example, discussing the questions with other students, actively using discussion forums or
using generative artificial intelligence (such as ChatGPT). Please consult the University’s
Student guidance on using AI and translation tools and Policy on Acceptable Assistance
with Assessment. Collusion and plagiarism detection software will be used to detect
academic misconduct, and suspected offences will be investigated.
Submission instructions
Submit your work by uploading it in Moodle by 4pm on 31 May. Work should be submitted as follows:
• Code (and numerical results) should be submitted in the form of Colab notebooks.
Code will be tested using unseen test data: use variables to store parameters rather
than values hard-wired into your code.
• Written text should be submitted in pdf format. Scans of handwritten text are acceptable, provided that they are legible.
• Use a separate set of files (ipynb and pdf, as appropriate) for parts 1 (Black-Scholes
model) and 2 (Local volatility model). “Part1.ipynb” or “part1.pdf” are examples of
sensible filenames.
Late submissions will incur a penalty according to the standard rules for late submission
of assessed work. It is advisable to allow enough time (at least one hour) to upload your
files to avoid possible congestion in Moodle. In the unlikely event of technical problems
in Moodle please email your files in a zip archive to alet.roux@york.ac.uk before the
deadline.
1 Black-Scholes model
Consider a risky asset whose stock price S follows geometric Brownian motion under
the risk-neutral measure Q, that is,
 dS_t = r S_t dt + \sigma S_t dW^Q_t
or, equivalently,
 S_t = S_0e^{(r-\sigma ^2/2)t + \sigma W^Q_t},
where r is the risk-free rate, \sigma is the volatility and W^Q is a standard Brownian motion
under Q.
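As a quick sanity check on the closed-form expression above, S_T can be simulated directly and the martingale property E^Q[e^{-rT} S_T] = S_0 verified numerically. The parameter values below are illustrative assumptions, not the assignment data:

```python
import numpy as np

# Illustrative parameters (assumed, not the assignment data)
S0, r, sigma, T = 100.0, 0.03, 0.3, 0.5
n = 100_000

rng = np.random.default_rng(seed=1)
W_T = np.sqrt(T) * rng.standard_normal(n)                 # W^Q_T ~ N(0, T)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)

# Under Q the discounted price is a martingale: E[e^{-rT} S_T] = S0
print(np.exp(-r * T) * S_T.mean())
```

With 100,000 samples the Monte Carlo average of the discounted terminal price should be close to S_0.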
The following code accesses data about a company Warner Bros. Discovery, Inc. using
Yahoo Finance.
import yfinance as yf  # [1]

# create Ticker object which will allow data access
data = yf.Ticker('WBD')

# name of company
print("Name of company:", data.info['longName'])

# summary of company business
import textwrap  # [2]
print("\n", textwrap.fill(data.info['longBusinessSummary'], 65))

[1] See Arrousi (2024) for more details and usage instructions. This package is available
in Google Colab, but on local installations (such as Anaconda) it may need to be
installed before use; Arrousi (2024) provides instructions.
[2] See The Python Software Foundation (2024) for more details on wrapping text.
Name of company: Warner Bros. Discovery, Inc.
Warner Bros. Discovery, Inc. operates as a media and
entertainment company worldwide. It operates through three
segments: Studios, Network, and DTC. The Studios segment produces
and releases feature films for initial exhibition in theaters;
produces and licenses television programs to its networks and
third parties and direct-to-consumer services; distributes films
and television programs to various third parties and internal
television; and offers streaming services and distribution
through the home entertainment market, themed experience
licensing, and interactive gaming. The Network segment comprises
domestic and international television networks. The DTC segment
offers premium pay-tv and streaming services. In addition, the
company offers portfolio of content, brands, and franchises
across television, film, streaming, and gaming under the Warner
Bros. Motion Picture Group, Warner Bros. Television Group, DC,
HBO, HBO Max, Max, Discovery Channel, discovery+, CNN, HGTV, Food
Network, TNT Sports, TBS, TLC, OWN, Warner Bros. Games, Batman,
Superman, Wonder Woman, Harry Potter, Looney Tunes, Hanna-Barbera, Game of Thrones, and The Lord of the Rings brands.
Further, it provides content through distribution platforms,
including linear network, free-to-air, and broadcast television;
authenticated GO applications, digital distribution arrangements,
content licensing arrangements, and direct-to-consumer
subscription products. Warner Bros. Discovery, Inc. was
incorporated in 2008 and is headquartered in New York, New York.
The following code downloads 6 months of daily prices and stores them in a NumPy array S.
# Download 6 months of price data (daily closing prices)
hist = data.history(period="6mo")

# Store closing prices in NumPy array S
import numpy as np
S = np.array(hist["Close"])

# Plot closing prices
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(8, 6))
ax.set(title=f"Closing prices of {data.info['longName']}",
       ylabel="Price", xlabel="Time step")
ax.xaxis.grid(True)
ax.yaxis.grid(True)
ax.plot(S)
plt.show()
1.1 Historical volatility
A popular way of calibrating the volatility \sigma is to estimate it from historical data. Suppose
that stock prices S_i (i=0,1,\ldots ,m ) have been recorded at some fixed time intervals of
length \tau >0 measured in years (for example, \tau =1/12 for monthly recorded stock prices).
The log returns on these prices over each time interval of length \tau are
 R_i=\ln (S_i/S_{i-1})
for i=1,\ldots ,m . The sample variance of these returns is s^2 , where
 s^2=\frac {1}{m-1}\sum _{i=1}^m\left (R_{i}-\overline {R}\right )^{2} \quad \text {and} \quad \overline {R} = \frac {1}{m}\sum _{i=1}^m R_{i}.
The sample variance s^2 estimates the variance \sigma ^2\tau of the log return on the stock price over a time interval of length \tau . It follows that
 \hat {\sigma }_{\text {hist}}=\frac {s}{\sqrt {\tau }}
is an estimate for the volatility \sigma . This is called historical volatility.
Exercise 1.1 (Coding task: 10% of project mark). Write a function that calculates the historical volatility of a given set of stock prices. It should have two arguments, a NumPy array of stock prices and \tau , and should return the estimate \hat {\sigma }_{\text {hist}} .
Then use your code to calculate and display the historical volatility of the Warner
Bros. Discovery, Inc. stock prices. You should copy and paste the code given above into
your Colab notebook in order to download the data.
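A minimal sketch of such a function is given below. The function name and the synthetic example prices are my own; the exercise asks you to apply the function to the downloaded WBD prices stored in S, with \tau = 1/252 for daily trading data.

```python
import numpy as np

def historical_volatility(S, tau):
    """Estimate annualised volatility from prices S recorded every tau years."""
    S = np.asarray(S, dtype=float)
    R = np.log(S[1:] / S[:-1])      # log returns R_i = ln(S_i / S_{i-1})
    s = R.std(ddof=1)               # sample standard deviation (divisor m - 1)
    return s / np.sqrt(tau)         # sigma_hist = s / sqrt(tau)

# Example with synthetic daily prices (tau = 1/252 trading years)
prices = np.array([100.0, 110.0, 100.0, 110.0, 100.0])
print(historical_volatility(prices, tau=1 / 252))
```

For these synthetic prices the log returns alternate between \pm\ln(1.1), so the estimate can be checked by hand against the sample-variance formula above.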
1.2 Binomial tree approximation
The continuous process S can be approximated by a binomial tree. One approach is based
on the log price process x defined as
 x_t = \ln S_t.
A binomial tree for x on the interval [0,T] can be built by taking \Delta t = T/M , where M is
the number of steps in the tree. The steps are at the equidistant times
 t_i = i\Delta t \text { for } i=0,\ldots ,M,
and x^{(M)}_i is the relevant approximation of x_{t_i} .
Consider the binomial tree approximation of x in which, at any time t_i and over a small
time interval \Delta t, the approximation x^{(M)}_i of x_{t_i} can go up by \Delta x (the space step, to be
determined), or go down by \Delta x, with probabilities q and 1-q , respectively, i.e.
 x^{(M)}_{i+1} = \begin {cases} x^{(M)}_i + \Delta x & \text {with probability } q \\ x^{(M)}_i - \Delta x & \text {with probability } 1-q. \end {cases}
The drift and volatility parameters of the continuous time process are now captured by
\Delta x and \Delta t, together with the parameters r and \sigma . We take x^{(M)}_0 = x_0 .
Exercise 1.2 (Binomial tree approximation: 25% of project mark). Design a binomial tree
method that approximates the Black-Scholes model and can be used to price European
options. Your work should include the following:
1. Written work:
(a) Compute the conditional first and second moments of x_{t+\Delta t} - x_t in the
continuous-time model and x^{(M)}_{i+1}-x^{(M)}_i in the binomial tree model.
(b) Given the value of \Delta t, derive formulae for \Delta x and q in terms of r and \sigma by
matching the first and second moments of the continuous-time model and the
binomial tree model.
(c) State the algorithm used for pricing a European-style derivative security with
payoff at time T given by f(S_T) .
2. Write a Python function that uses the tree approximation to calculate the price at
time 0 of a European style derivative security with payoff at time T given by f(S_T) .
Your function should take the arguments S_0 , T, r, \sigma , M and f.
3. Use your code to price a derivative with payoff
 f(S) = \begin {cases} S^{ 1.9 } & \text {if } S < K, \\ 0 & \text {otherwise}\end {cases}
and the following data:
• T = 0.5 .
• r = 0.03 .
• \sigma = \hat {\sigma }_{\text {hist}} as calculated for the Warner Bros. Discovery, Inc. data (use \sigma =0.3 if
you were not able to calculate \hat {\sigma }_{\text {hist}} ).
• S_0 as the final item in the array of Warner Bros. Discovery, Inc. data.
• M = 180 .
• K = S_0 .
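One possible shape for such a pricing function (part 2 of the exercise) is sketched below. It assumes the standard moment-matched parameters \nu = r - \sigma^2/2, \Delta x = \sqrt{\sigma^2\Delta t + \nu^2\Delta t^2} and q = 1/2 + \nu\Delta t/(2\Delta x); deriving these is part 1(b) of the exercise, so treat them here as a stated assumption. The fallback values \sigma = 0.3 and S_0 = 100 are placeholders for \hat{\sigma}_{\text{hist}} and the final WBD closing price.

```python
import numpy as np

def binomial_tree_price(S0, T, r, sigma, M, f):
    """Price a European payoff f(S_T) on an additive binomial tree for x = ln S."""
    dt = T / M
    nu = r - 0.5 * sigma**2
    dx = np.sqrt(sigma**2 * dt + (nu * dt) ** 2)   # space step (moment matching)
    q = 0.5 + nu * dt / (2 * dx)                   # up probability
    # Terminal log-prices x_0 + k*dx for k = -M, -M+2, ..., M
    k = np.arange(-M, M + 1, 2)
    V = f(np.exp(np.log(S0) + k * dx))             # payoff at maturity
    disc = np.exp(-r * dt)
    for _ in range(M):                             # backward induction
        V = disc * (q * V[1:] + (1 - q) * V[:-1])
    return float(V[0])

# The payoff from part 3, with assumed fallback data sigma = 0.3, S0 = 100
S0, T, r, sigma, M = 100.0, 0.5, 0.03, 0.3, 180
K = S0
f = lambda S: np.where(S < K, S ** 1.9, 0.0)
print(binomial_tree_price(S0, T, r, sigma, M, f))
```

A useful check: a constant payoff f(S) = 1 must price to exactly e^{-rT}, and a vanilla call should converge to the Black-Scholes formula as M grows.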
2 Local volatility model
A model for a stock S is given by the stochastic differential equation
 dS_t = r S_t\, dt + \sigma (t, S_t) S_t\, dW_t^Q, \qquad (1)
where r is the risk-free interest rate and W^Q is a Brownian motion under the risk-neutral
probability Q. The function \sigma is defined as
 \sigma (t,S) = 0.25 e^{ -0.03 t} \left (\frac { 105 }{S}\right )^{ 0.4 }.
The aim of this part of the project is to use Monte Carlo and finite difference techniques to
study a European style derivative where the payoff at time T=1 is
 X = \begin {cases} (S_T - 110)^+ & \text {if } S_t \ge 100 \text { for all } t\in [0,T], \\ 0 & \text {otherwise.} \end {cases}
Take S_0=£105 and r=0.05 .
2.1 Option pricing
The aim in this section is to approximate the price of the option at time 0 for a range
of stock price values.
2.1.1 Monte Carlo method
Exercise 2.1 (Monte Carlo estimate: 30% of project mark). Implement Monte Carlo simulation to estimate the price of this derivative at time 0 under Q. Use the Euler method together with antithetic variates to reduce the variance of the estimate.
Your work should include a statement of the Monte Carlo algorithm that you are using.
Report the Monte Carlo estimate of the price, the standard error and an approximate 95%
confidence interval with 10000 paths and 20000 steps per path.
Hint: To reduce the discretization error, apply the Euler method to the process Y defined
as Y_t=\ln S_t instead of to S directly. Use the Itô formula to derive a stochastic differential
equation for Y.
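One way this can be sketched is shown below, using the local volatility function \sigma(t,S) given above and applying the Euler method to Y = \ln S as the hint suggests, with the barrier monitored at the discretisation times. The function names are my own, and the path and step counts are reduced here for speed; the exercise asks for 10000 paths and 20000 steps per path.

```python
import numpy as np

def sigma_loc(t, S):
    """Local volatility function from the model specification."""
    return 0.25 * np.exp(-0.03 * t) * (105.0 / S) ** 0.4

def mc_barrier_price(S0=105.0, r=0.05, T=1.0, barrier=100.0, strike=110.0,
                     n_paths=2_000, n_steps=200, seed=0):
    """Euler scheme for Y = ln S with antithetic variates and a discretely
    monitored barrier. Returns (estimate, standard error)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    sqdt = np.sqrt(dt)
    Y = np.full(n_paths, np.log(S0))
    Ya = Y.copy()                          # antithetic paths (driven by -Z)
    alive = np.ones(n_paths, dtype=bool)   # barrier not yet breached
    alive_a = np.ones(n_paths, dtype=bool)
    for i in range(n_steps):
        t = i * dt
        Z = rng.standard_normal(n_paths)
        for y, al, z in ((Y, alive, Z), (Ya, alive_a, -Z)):
            vol = sigma_loc(t, np.exp(y))
            y += (r - 0.5 * vol**2) * dt + vol * sqdt * z   # Euler step for ln S
            al &= np.exp(y) >= barrier                      # barrier check
    payoff = np.where(alive, np.maximum(np.exp(Y) - strike, 0.0), 0.0)
    payoff_a = np.where(alive_a, np.maximum(np.exp(Ya) - strike, 0.0), 0.0)
    X = np.exp(-r * T) * 0.5 * (payoff + payoff_a)          # antithetic pairs
    return X.mean(), X.std(ddof=1) / np.sqrt(n_paths)

est, se = mc_barrier_price()
print(f"price estimate {est:.4f}, standard error {se:.4f}, "
      f"95% CI [{est - 1.96 * se:.4f}, {est + 1.96 * se:.4f}]")
```

Averaging each antithetic pair before computing the sample standard deviation is what makes the reported standard error valid for the variance-reduced estimator.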
2.1.2 Finite difference methods
The price of a path-independent option at time t can be expressed as V(t,S_t) , where the
function V satisfies the partial differential equation
 \frac {\partial V}{\partial t}(t,S) + r S \frac {\partial V}{\partial S}(t,S) + \tfrac {1}{2}\sigma ^2(t,S)S^2\frac {\partial ^2 V}{\partial S^2}(t,S) - rV(t,S) = 0 \qquad (2)
for all t\in [0,T) and S>0 , together with boundary condition
 V(t, 100) = 0
and final value
 V(T, S) = \begin {cases} (S - 110)^+ & \text {if } S \ge 100, \\ 0 & \text {otherwise}. \end {cases}
Exercise 2.2 (Backward Euler method: 25% of project mark). Design an implicit (backward Euler) finite difference scheme to approximate the solution to the partial differential
equation in Equation 2. Your work should include the following:
1. Convert the final value problem into an initial value problem, and define a suitable
grid that contains the points (0,100) and (0,S_0) .
2. Propose an appropriate boundary condition for S\rightarrow \infty .
3. Derive the iterative scheme in matrix form that would allow you to approximate
the value of V at the grid points. Give the lower diagonal, main diagonal and upper
diagonal elements of the matrix.
4. Write a Python function to implement the iterative scheme. Your function should
have suitable arguments and return values.
5. Compare your results to the Monte Carlo estimate. Is it possible to choose values
for the step sizes that result in the price V(0,S_0) being within the confidence interval
produced by the Monte Carlo method?
6. Report numerical results in an appropriate way, for example, via a 3d surface representing option prices for all t\in [0,T] and a 2d graph representing option prices at
time t=0 , all for a range of S.
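A sketch of one such scheme is given below, under stated assumptions of my own: the grid runs from the barrier at S = 100 up to an assumed truncation S_max = 300, the upper boundary condition is taken as V(t, S_max) \approx S_max - 110 e^{-r(T-t)} (one common choice for part 2; others are possible), and the tridiagonal system is solved with a hand-rolled Thomas algorithm. The grid sizes are illustrative.

```python
import numpy as np

def sigma_loc(t, S):
    """Local volatility function from the model specification."""
    return 0.25 * np.exp(-0.03 * t) * (105.0 / S) ** 0.4

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system by forward elimination and back substitution."""
    n = len(rhs)
    c, d, b = upper.copy(), rhs.copy(), diag.copy()
    for i in range(1, n):
        w = lower[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def backward_euler_price(r=0.05, T=1.0, B=100.0, K=110.0,
                         S_max=300.0, N=200, M=400):
    """Implicit scheme for the pricing PDE on [B, S_max], marching in tau = T - t."""
    dS = (S_max - B) / N
    dtau = T / M
    S = B + dS * np.arange(N + 1)
    V = np.maximum(S - K, 0.0)                   # initial value at tau = 0
    for m in range(1, M + 1):
        tau = m * dtau
        t = T - tau                              # coefficients at the new level
        Si = S[1:-1]
        a = 0.5 * sigma_loc(t, Si) ** 2 * Si**2 / dS**2
        b = r * Si / (2 * dS)
        lower = -(a - b) * dtau                  # coefficient of V_{i-1}
        diag = 1.0 + (2.0 * a + r) * dtau        # coefficient of V_i
        upper = -(a + b) * dtau                  # coefficient of V_{i+1}
        rhs = V[1:-1].copy()
        V_low = 0.0                              # barrier: V(t, 100) = 0
        V_high = S_max - K * np.exp(-r * tau)    # assumed behaviour as S -> infinity
        rhs[0] -= lower[0] * V_low
        rhs[-1] -= upper[-1] * V_high
        V[1:-1] = thomas_solve(lower, diag, upper, rhs)
        V[0], V[-1] = V_low, V_high
    return S, V

S, V = backward_euler_price()
i0 = np.searchsorted(S, 105.0)
print(f"V(0, 105) approx {V[i0]:.4f}")
```

Storing V at every tau level (rather than overwriting it) would also give the surface required for part 6 and the theta grid in Exercise 2.3.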
2.2 Theta
Exercise 2.3 (Theta: 10% of project mark). Approximate the value of the theta \Theta = \frac {\partial V}{\partial \tau } at
all the grid points used for the finite difference approximation in Exercise 2.2. Your work
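If the finite difference loop stores the solution at every tau level in a 2-d array, theta can be approximated at all grid points by differencing along the tau axis, for example with np.gradient. The self-contained illustration below uses an analytic stand-in surface of my own, V(\tau, S) = e^{\tau} S, whose exact theta equals V itself, so the accuracy of the approximation can be checked directly:

```python
import numpy as np

# Analytic stand-in for the stored finite-difference solution:
# V(tau, S) = exp(tau) * S has exact theta dV/dtau = exp(tau) * S.
tau = np.linspace(0.0, 1.0, 101)
S = np.linspace(100.0, 300.0, 201)
Vgrid = np.exp(tau)[:, None] * S[None, :]     # shape (len(tau), len(S))

# Central differences along the tau axis (one-sided at the endpoints)
theta = np.gradient(Vgrid, tau, axis=0)
print(np.max(np.abs(theta - Vgrid) / Vgrid))  # max relative error of the approximation
```

The same one-liner applied to the stored solution array from Exercise 2.2 yields theta on the whole grid.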