ECE1747H Parallel Programming

Assignment 2: Parallelize What Seems Inherently Sequential: ECE1747H F LEC0101 20239: Parallel Programming

Assignment 2: Parallelize What Seems
Inherently Sequential
Introduction

In parallel computing, there are operations that, at first glance, seem inherently sequential but can
be transformed and executed efficiently in parallel. One such operation is the "scan". At its
essence, the scan operation processes an array to produce a new array where each element is
the result of a binary associative operation applied to all preceding elements in the original array.
Consider an array of numbers, and envision producing a new array where each element is the
sum of all previous numbers in the original array. This type of scan that uses "+" as the binary
operator is commonly known as a "prefix-sum".  Scan has two primary variants: exclusive and
inclusive. In an exclusive scan, the result at each position excludes the current element, while in
an inclusive scan, it includes the current element. For instance, given an array [3, 1, 7, 0] and
an addition operation, an exclusive scan would produce [0, 3, 4, 11] , and an inclusive scan
would produce [3, 4, 11, 11] . 
Scan operations are foundational in parallel algorithms, with applications spanning from sorting to
stream compaction, building histograms and even more advanced tasks like constructing data
structures in parallel. In this assignment, we'll delve deep into the intricacies of scan, exploring its
efficient implementation using CUDA.

Assignment Description

In this assignment, you will implement a parallel scan using CUDA. Let's further assume that the
scan is inclusive and the operator involved in the scan is addition. In other words, you will be
implementing an inclusive prefix sum.
The following is a sequential version of inclusive prefix sum:

void sequential_scan(int *x, int *y, unsigned int N) {
  // Inclusive scan: y[i] = x[0] + x[1] + ... + x[i]; assumes N >= 1.
  y[0] = x[0];
  for (unsigned int i = 1; i < N; ++i) {
    y[i] = y[i - 1] + x[i];
  }
}

While this might seem like a task demanding sequential processing, with the right algorithm, it can
be efficiently parallelized. Your parallel implementation will be compared against the sequential
version, which runs on the CPU. Your mark will be based on the speedup achieved by your
implementation. Note that data transfer time is not included in this assignment. However, in
real-world applications, data transfer is often a bottleneck, and it is important to include it in the
speedup calculation.

Potential Algorithms

In this section, I describe a few algorithms for implementing a parallel scan on the GPU, which
you may use for this assignment. Of course, you may also choose to use other algorithms. These
algorithms were chosen for their simplicity and may not be the fastest.
We will first present algorithms for performing parallel segmented scan, in which every thread
block will perform a scan on a segment of elements in the input array in parallel. We will then
present methods that combine the segmented scan results into the scan output for the entire input
array.

Segmented Scan Algorithms

The exploration of parallel solutions for scan problems has a long history, spanning several
decades. Interestingly, this research began even before the formal establishment of Computer
Science as a discipline. Scan circuits, crucial to the operation of high-speed adder hardware like
carry-skip adders, carry-select adders, and carry-lookahead adders, stand as evidence of this
pioneering research.
As we know, the fastest parallel method to compute the sum of a set of values is through a
reduction tree. Given enough execution units, this tree can compute the sum of N values in
log2(N) time units. Additionally, the tree can produce intermediate sums, which can be used to
produce the scan (prefix sum) output values. This principle is the foundation of the design of both
the Kogge-Stone and Brent-Kung adders.

Brent-Kung Algorithm

The above figure shows the steps of a parallel inclusive prefix sum algorithm based on the Brent-Kung
adder design. The top half of the figure produces the sum of all 16 values in 4 steps. This is
exactly how a reduction tree works. The second part of the algorithm (bottom half of the figure) is
to use a reverse tree to distribute the partial sums and use them to complete the result of those
positions. 
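The two phases above can be sketched on the CPU as follows. This is a step-pattern sketch, not the CUDA kernel itself: in the actual kernel, each iteration of an inner loop is executed by a separate thread on a shared-memory copy of the section, with `__syncthreads()` between the stride steps. The function name and the power-of-two assumption on the section length are illustrative choices.

```cpp
#include <vector>

// CPU sketch of the Brent-Kung step pattern for an inclusive scan.
// Assumes x.size() is a power of two. On the GPU, each inner-loop
// iteration is one thread, and a barrier separates the stride steps.
std::vector<int> brent_kung_scan(std::vector<int> x) {
    unsigned int n = x.size();
    // Up-sweep: the reduction tree; x[n-1] ends up holding the total.
    for (unsigned int stride = 1; stride < n; stride *= 2)
        for (unsigned int i = 2 * stride - 1; i < n; i += 2 * stride)
            x[i] += x[i - stride];
    // Down-sweep: the reverse tree distributes the partial sums to
    // the positions the reduction tree left incomplete.
    for (unsigned int stride = n / 4; stride >= 1; stride /= 2)
        for (unsigned int i = 2 * stride - 1; i + stride < n; i += 2 * stride)
            x[i + stride] += x[i];
    return x;
}
```

Note that every input element is touched only O(1) times per phase, which is why Brent-Kung performs O(n) total work, at the cost of roughly twice the depth of Kogge-Stone.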

Kogge-Stone Algorithm

The Kogge-Stone algorithm is a well-known, minimum-depth network that uses a recursive-doubling
approach for aggregating partial reductions. The above figure shows an in-place scan
algorithm that operates on an array X that originally contains the input values. It iteratively
evolves the contents of the array into the output elements.
In the first iteration, each position other than X[0] receives the sum of its current content and that
of its left neighbor. This is illustrated by the first row of addition operators in the figure. As a result,
X[i] contains x[i-1] + x[i]. In the second iteration, each position other than X[0] and X[1] receives the
sum of its current content and that of the position two elements away (see the second row
of adders). After k iterations, X[i] contains the sum of up to 2^k input elements at and before that
location.
Although it has a work complexity of O(n log n), its shallow depth and simple shared-memory
address calculations make it a favorable approach for SIMD (SIMT) setups, like GPU warps.
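The iteration pattern can be sketched on the CPU as follows, again as a hedged sketch rather than the kernel itself. The `prev` snapshot stands in for the barrier that, in a CUDA kernel, keeps every thread reading pre-step values; on the GPU the inner loop runs as one thread per element in shared memory, with `__syncthreads()` between strides.

```cpp
#include <vector>

// CPU sketch of the Kogge-Stone step pattern (inclusive scan).
// The 'prev' snapshot plays the role of the barrier: every position
// reads only values from before the current stride step.
std::vector<int> kogge_stone_scan(std::vector<int> x) {
    unsigned int n = x.size();
    for (unsigned int stride = 1; stride < n; stride *= 2) {
        std::vector<int> prev = x;               // values before this step
        for (unsigned int i = stride; i < n; ++i)
            x[i] = prev[i] + prev[i - stride];   // add neighbor 'stride' away
    }
    return x;
}
```

Only log2(n) stride steps are needed, which is what makes the network minimum-depth despite its O(n log n) work.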

Scan for Arbitrary-length Inputs

For many applications, the number of elements to be processed by a scan operation can be in the
millions or even billions. The algorithms that we have presented so far perform local scans on
input segments. Therefore, we still need a way to consolidate the results from different sections.

Hierarchical Scan

One such consolidation approach is the hierarchical scan. For a large dataset, we first partition
the input into sections so that each of them fits into the shared memory of a streaming
multiprocessor and can be processed by a single thread block. The aforementioned algorithms can
be used to perform a scan on each partition. At the end of the grid execution, the Y array will
contain the scan results for the individual sections, called scan blocks (see the above figure). The
second step gathers the last result element from each scan block into an array S and performs a
scan on these elements. In the last step of the hierarchical scan algorithm, the intermediate results
in S are added to the corresponding elements in Y to form the final result of the scan.
For those who are familiar with computer arithmetic circuits, you may already recognize that the
principle behind the hierarchical scan algorithm is quite similar to that of carry look-ahead adders
in modern processor hardware.
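The three steps can be sketched sequentially on the CPU as follows. This is a host-side sketch under illustrative names: each section stands in for one thread block, and on the GPU, steps 1 and 3 are grid launches while step 2 is a scan over the much smaller array S of section sums.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// CPU sketch of the three hierarchical-scan steps.
std::vector<int> hierarchical_scan(const std::vector<int>& x, std::size_t section) {
    std::vector<int> y = x;
    std::vector<int> s;                               // last element of each scan block
    // Step 1: scan each section independently (one block per section).
    for (std::size_t lo = 0; lo < y.size(); lo += section) {
        std::size_t hi = std::min(lo + section, y.size());
        for (std::size_t i = lo + 1; i < hi; ++i)
            y[i] += y[i - 1];
        s.push_back(y[hi - 1]);                       // section's inclusive total
    }
    // Step 2: scan the section sums so s[b] = total of sections 0..b.
    for (std::size_t b = 1; b < s.size(); ++b)
        s[b] += s[b - 1];
    // Step 3: add the total of all preceding sections to each element.
    for (std::size_t lo = section, b = 0; lo < y.size(); lo += section, ++b)
        for (std::size_t i = lo; i < std::min(lo + section, y.size()); ++i)
            y[i] += s[b];
    return y;
}
```

For very large inputs, step 2 may itself exceed one block's capacity, in which case the same scheme is applied recursively to S.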

Single Pass Scan

One issue with hierarchical scan is that the partially scanned results are stored into global
memory after step 1 and reloaded from global memory before step 3. The memory access is not
overlapped with computation and can significantly affect the performance of the scan
implementation (as shown in the above figure).
Many techniques have been proposed to mitigate this issue. Single-pass chained scan (also
called stream-based scan or domino-style scan) passes the partial sum data in one direction
across adjacent blocks. Chained scan is based on the key observation that the global scan step
(step 2 in the hierarchical scan) can be performed in a domino fashion (i.e., from left to right,
with each output used immediately). As a result, the global scan step does not require a global
synchronization after it, since each segment only needs the partial sums of the segments before itself.
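The domino hand-off can be sketched on the CPU with one thread per "block". This is an assumption-laden sketch, not a GPU implementation: on the GPU the wait is typically performed by a single leader thread per block on a flag in global memory, and real single-pass scans add decoupled look-back to shorten the chain.

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// CPU sketch of the domino-style chained scan. Each block scans its own
// section, waits for the running prefix of its predecessor, publishes its
// own running total to the successor, and only then fixes up its results.
std::vector<int> chained_scan(const std::vector<int>& x, std::size_t section) {
    std::size_t nblocks = (x.size() + section - 1) / section;
    std::vector<int> y = x;
    std::vector<int> prefix(nblocks + 1, 0);      // prefix[b] = sum of blocks < b
    std::vector<std::atomic<bool>> ready(nblocks + 1);
    for (auto& r : ready) r.store(false);
    ready[0].store(true);                         // block 0 has no predecessor

    std::vector<std::thread> blocks;
    for (std::size_t b = 0; b < nblocks; ++b) {
        blocks.emplace_back([&, b] {
            std::size_t lo = b * section;
            std::size_t hi = std::min(lo + section, y.size());
            for (std::size_t i = lo + 1; i < hi; ++i)   // local scan of the section
                y[i] += y[i - 1];
            while (!ready[b].load())                    // domino wait on predecessor
                std::this_thread::yield();
            int carry = prefix[b];
            prefix[b + 1] = carry + y[hi - 1];          // publish running total first,
            ready[b + 1].store(true);                   // unblocking the successor,
            for (std::size_t i = lo; i < hi; ++i)       // then fix up locally
                y[i] += carry;
        });
    }
    for (auto& t : blocks) t.join();
    return y;
}
```

Note the ordering: each block publishes its running total before adding its own carry locally, so the successor is never stalled behind the fix-up loop. That overlap of communication with computation is exactly what the hierarchical scan's separate global steps give up.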

Further Reading

Parallel Prefix Sum (Scan) with CUDA


Single-pass Parallel Prefix Scan with Decoupled Look-back


Report


Along with your code, you will also need to submit a report. Your report should describe the
following aspects in detail:
Describe which algorithm you chose and why.
Describe any design decisions you made and why. Explain how they might affect performance.
Describe anything you tried (even if it is not in the final implementation) and whether it worked.
Why or why not?
Analyze the bottlenecks of your current implementation and the potential optimizations.
Analyze the bottleneck of your current implementation and what are the potential
optimizations.
Use font Times New Roman, size 10, single spaced. The length of the report should not exceed 3
pages.

Setup

Initial Setup

Start by unzipping the provided starter code a2.zip into a protected directory within your
UG home directory. There are multiple files in the provided zip file; the only file you will need
to modify and hand in is implementation.cu. You are not allowed to modify the other files, as only
your implementation.cu file will be tested for marking.
Within implementation.cu, you need to insert your identification information in the
print_team_info() function. This information is used for marking, so do it right away before you
start the assignment.

Compilation

The assignment uses GNU Make to compile the source code. Run make in the assignment
directory to compile the project, and the executable named ece1747a2 should appear in the same
directory.

Coding Rules

The coding rules are simple.
You must not use any existing GPU parallel programming library such as Thrust or CUB.
You may implement any algorithm you want.
Your implementation must use CUDA C++ and must compile using the provided Makefile.
You must not interfere with or attempt to alter the time measurement mechanism.
Your implementation must be properly synchronized so that all operations are finished
before your implementation returns.

Evaluation

The assignment will be evaluated on a UG machine equipped with an Nvidia GPU. Therefore, make
sure to test your implementation on the UG machines before submission. When you evaluate your
implementation using the command below, you should see similar output.

ece1747a2 -g
************************************************************************************
Submission Information:
nick_name: default-name
student_first_name: john
student_last_name: doe
student_student_number: 0000000000
************************************************************************************
Performance Results:
Time consumed by the sequential implementation: 124374us
Time consumed by your implementation: 125073us
Optimization Speedup Ratio (nearest integer): 1
************************************************************************************

Marking Scheme

The total available marks for the assignment are divided as follows: 20% for the lab report, 65%
for the non-competitive portion, and 15% for the competitive portion. The non-competitive section
is designed to allow individuals who put in minimal effort to pass the course, while the competitive
section aims to reward those who demonstrate higher merit.

Non-competitive Portion (65%)

Achieving full marks in the non-competitive portion should be straightforward for anyone who puts
in the minimal acceptable amount of effort. You will be awarded full marks in this section if your
implementation achieves a threshold speedup of 30x. Based on submissions during the
assignment, the TA reserves the right to adjust this threshold as deemed appropriate, providing at
least one week's notice.

Competitive Portion (15%)

Marks in this section will be determined based on the speedup of your implementation relative to
the best and worst speedups in the class. The formula for this is:

mark = (your speedup - worst speedup over threshold) / (top speedup - worst speedup over threshold)

Throughout the assignment, updates on competitive marks will be posted on Piazza at intervals
not exceeding 24 hours.
The speedup will be measured on a standard UG machine equipped with a GPU. (Therefore, make
sure to test your implementation on the UG machines.) The final marking will be performed after
the submission deadline on all valid submissions.

Submission

Submit your report on Quercus. Make sure your report is in PDF format and can be viewed with a
standard PDF viewer (e.g., xpdf or acroread).

When you have completed the lab, you will hand in just implementation.cu that contains your
solution. The standard procedure to submit your assignment is by typing submitece1747f 2
implementation.cu on one of the UG machines.
Make sure you have included your identifying information in the print_team_info() function.
Remove any extraneous print statements.
