Andrew Ng Deep Learning Programming Assignment (5-3) Part 1 - Neural Machine Translation

This post walks through a programming assignment from Andrew Ng's DeepLearning.ai course on Neural Machine Translation (NMT) — specifically, using an attention mechanism to convert human-readable dates into machine-readable dates. It covers data preprocessing, the implementation of the attention model, and a visualization of attention weights that shows which parts of the input the model attends to when generating each part of the output.

This is part of a series on the programming assignments of Andrew Ng's Coursera DeepLearning.ai specialization. It covers the Week 3 assignment ("Sequence Models and Attention Mechanism") of the Sequence Models course — Part 1: Machine Translation.

The lecture notes for this week are available here: "DeepLearning.ai Course Notes (5-3) – Sequence Models and Attention Mechanism". If you have any questions or suggestions, feel free to leave a comment.


Neural Machine Translation

Welcome to your first programming assignment for this week!

You will build a Neural Machine Translation (NMT) model to translate human readable dates (“25th of June, 2009”) into machine readable dates (“2009-06-25”). You will do this using an attention model, one of the most sophisticated sequence-to-sequence models.

This notebook was produced together with NVIDIA’s Deep Learning Institute.

Let’s load all the packages you will need for this assignment.

from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np

from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline

You can get the model file and the nmt_utils Python module from here.

1 - Translating human readable dates into machine readable dates

The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler “date translation” task.

The network will take as input a date written in a variety of possible formats (e.g. “the 29th of August 1958”, “03/30/1968”, “24 JUNE 1987”) and translate it into a standardized, machine readable date (e.g. “1958-08-29”, “1968-03-30”, “1987-06-24”). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD.

1.1 - Dataset

We will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let’s run the following cells to load the dataset and print some examples.

m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]
[('9 may 1998', '1998-05-09'),
 ('10.09.70', '1970-09-10'),
 ('4/28/90', '1990-04-28'),
 ('thursday january 26 1995', '1995-01-26'),
 ('monday march 7 1983', '1983-03-07'),
 ('sunday may 22 1988', '1988-05-22'),
 ('tuesday july 8 2008', '2008-07-08'),
 ('08 sep 1999', '1999-09-08'),
 ('1 jan 1981', '1981-01-01'),
 ('monday may 22 1995', '1995-05-22')]

You’ve loaded:
- dataset: a list of tuples of (human readable date, machine readable date)
- human_vocab: a python dictionary mapping all characters used in the human readable dates to an integer-valued index
- machine_vocab: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with human_vocab.
- inv_machine_vocab: the inverse dictionary of machine_vocab, mapping from indices back to characters.
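For illustration, the inverse dictionary can be rebuilt from machine_vocab with a one-line comprehension. This is a minimal sketch — the vocabulary below is written out by hand for the example, whereas the real dictionaries are returned by load_dataset:

```python
# Hypothetical machine_vocab for illustration; load_dataset builds the real one.
# 11 characters in total: '-' plus the digits 0-9.
machine_vocab = {'-': 0, '0': 1, '1': 2, '2': 3, '3': 4, '4': 5,
                 '5': 6, '6': 7, '7': 8, '8': 9, '9': 10}

# Invert the character -> index mapping to get index -> character.
inv_machine_vocab = {index: char for char, index in machine_vocab.items()}

print(inv_machine_vocab[0])   # '-'
print(inv_machine_vocab[10])  # '9'
```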

Let’s preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since “YYYY-MM-DD” is 10 characters long).

Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)

print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
X.shape: (10000, 30)
Y.shape: (10000, 10)
Xoh.shape: (10000, 30, 37)
Yoh.shape: (10000, 10, 11)

You now have:
- X: a processed version of the human readable dates in the training set, where each character is replaced by the index it is mapped to via human_vocab. Each date is further padded to Tx values with a special character (<pad>). X.shape = (m, Tx)
- Y: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in machine_vocab. You should have Y.shape = (m, Ty).
- Xoh: one-hot version of X, the “1” entry’s index is mapped to the character thanks to human_vocab. Xoh.shape = (m, Tx, len(human_vocab))
- Yoh: one-hot version of Y, the “1” entry’s index is mapped to the character thanks to machine_vocab. Yoh.shape = (m, Ty, len(machine_vocab)). Here, len(machine_vocab) = 11 since there are 11 characters (‘-’ as well as 0-9).
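As a minimal sketch of what preprocess_data does (the real helper lives in nmt_utils; the tiny vocabulary, the to_indices_padded and one_hot names, and the `<unk>`/`<pad>` handling below are assumptions for illustration), an index sequence can be padded to Tx and one-hot encoded with plain NumPy:

```python
import numpy as np

def to_indices_padded(text, vocab, Tx):
    """Map each character to its vocab index, truncate to Tx, pad with '<pad>'."""
    text = text.lower()[:Tx]
    idx = [vocab.get(ch, vocab['<unk>']) for ch in text]
    return idx + [vocab['<pad>']] * (Tx - len(idx))

def one_hot(indices, vocab_size):
    """Convert a list of indices into a (len(indices), vocab_size) one-hot matrix."""
    oh = np.zeros((len(indices), vocab_size))
    oh[np.arange(len(indices)), indices] = 1
    return oh

# Tiny hypothetical vocabulary, for illustration only.
vocab = {'<pad>': 0, '<unk>': 1, '9': 2, ' ': 3, 'm': 4, 'a': 5, 'y': 6}
idx = to_indices_padded('9 may', vocab, Tx=8)
print(idx)                              # [2, 3, 4, 5, 6, 0, 0, 0]
print(one_hot(idx, len(vocab)).shape)   # (8, 7)
```

With the real vocabularies this yields exactly the shapes printed above: (m, Tx, len(human_vocab)) for Xoh and (m, Ty, len(machine_vocab)) for Yoh.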

Let’s also look at some examples of preprocessed training examples. Feel free to play with index in the cell below to navigate the dataset and see how source/target dates are preprocessed.

index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
Source date: 9 may 1998
Target date: 1998-05-09

Source after preprocessing (indices): [12  0 24 13 34  0  4 12 12 11 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36
 36 36 36 36 36]
Target after preprocessing (indices): [ 2 10 10  9  0  1  6  0  1 10]

Source after preprocessing (one-hot): [[ 0.  0.  0. ...,  0.  0.  0.]
 [ 1.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 ..., 
 [ 0.  0.  0. ...,  0.  0.  1.]
 [ 0.  0.  0. ...,  0.  0.  1.]
 [ 0.  0.  0. ...,  0.  0.  1.]]
Target after preprocessing (one-hot): [[ 0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.]
 [ 1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.]
 [ 1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]]

2 - Neural machine translation with attention

If you had to translate a book’s paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.

The attention mechanism tells a Neural Machine Translation model where it should pay attention at each step.

2.1 - Attention mechanism

In this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one “attention” step does to calculate the attention variables α⟨t,t′⟩, which are used to compute the context variable context⟨t⟩ for each timestep in the output (t = 1, …, Ty).


Figure 1: Neural machine translation with attention

Here are some properties of the model that you may notice:

  • There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes before the attention mechanism, we will call it pre-attention Bi-LSTM. The LSTM at the top of the diagram comes after the attention mechanism, so we will call it the post-attention LSTM. The pre-attention Bi-LSTM goes through Tx time steps; the post-attention LSTM goes through Ty time steps.

  • The post-attention LSTM passes s⟨t⟩, c⟨t⟩ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-attention sequence model, so the state captured by the RNN was just the output activation s⟨t⟩. But since we are using an LSTM here, the LSTM has both the output activation s⟨t⟩ and the hidden cell state c⟨t⟩. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-attention LSTM at time t will not take the specific generated y⟨t−1⟩ as input; it only takes s⟨t⟩ and c⟨t⟩ as input. We have designed the model this way because (unlike language generation, where adjacent characters are highly correlated) there isn’t as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.

  • We use a⟨t⟩ = [a→⟨t⟩; a←⟨t⟩] to represent the concatenation of the activations of both the forward-direction and backward-direction passes of the pre-attention Bi-LSTM.
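At its core, one attention step reduces to a softmax over "energies" followed by a weighted sum of the Bi-LSTM activations. A minimal NumPy sketch (the assignment's actual one_step_attention is built from Keras layers; the function name, random inputs, and dimensions below are assumptions for illustration):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def one_step_attention_np(a, energies):
    """
    a:        (Tx, 2*n_a) pre-attention Bi-LSTM activations a<t'>.
    energies: (Tx,) unnormalized scores e<t,t'> from the small dense network
              applied to [s<t-1>; a<t'>].
    Returns context<t> = sum over t' of alpha<t,t'> * a<t'>.
    """
    alphas = softmax(energies)   # attention weights alpha<t,t'>, sum to 1
    return alphas @ a            # (2*n_a,) weighted sum of activations

Tx, n_a = 30, 32
a = np.random.randn(Tx, 2 * n_a)
energies = np.random.randn(Tx)
context = one_step_attention_np(a, energies)
print(context.shape)   # (64,)
```

In the Keras model, the same computation is expressed with RepeatVector (to copy s⟨t−1⟩ across the Tx steps), Concatenate, two Dense layers producing the energies, a softmax Activation, and Dot for the weighted sum.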
