Easy and cheap cluster building on AWS

This article describes how to use AWS spot instances to build a large-scale parallel computing cluster quickly and cheaply. With an automated configuration script, SSHFS file transfer, and GNU Parallel task distribution, it achieves an efficient data-processing workflow, keeping data safe and computation running even on ephemeral instances.


https://grapeot.me/easy-and-cheap-cluster-building-on-aws.html

Thu 17 July 2014, by Yan Wang

Why?

Machine learning / computer vision research often requires a lot of computational resources, like extracting features from a large number of images, or training large-scale (or many) classifiers. Therefore people use more than one machine for the task. The procedure usually looks like this: copy the executables and data files to all the machines, configure the environments, manually divide the tasks, actually run the commands, and collect the results. Besides the complicated workflow, another practical problem is where to get the machines. Maintaining your own cluster is definitely an option, but an extremely expensive and time-consuming one. Renting from AWS, especially using spot instances, is a much cheaper and more practical alternative.

But a lot of factors prevent spot instances from being really useful (I assume you already know how they work):

  • Spot instances don't have persistent storage, which means whatever you have on the hard disk may be lost in the next minute. How do you deal with this?
  • The same property also makes system configuration a problem -- how do you easily make a blank system usable?
  • How do you efficiently copy bulk data to AWS?
  • Manual task division and command execution doesn't sound right. How do you make it easier and smarter (and faster)?

Over quite a few months, I gradually accumulated a tool chain that handles all of these problems.

What will you get?

Here is an example of a 128-core, 240GB cluster. It takes ~10 minutes to build from scratch (or ~1 minute to build from an AMI image), and costs about one dollar per hour. Like any AWS instance, the machines cost nothing when you don't use them (by shutting them down). All your data stays on your own hard disk, and the loss from a spot request failure is minimized. Best of all, task submission is fairly simple -- a single line of bash will do the job, like

cat cluster.sh | parallel --sshlogin 8/m1 --sshlogin 8/m2 --sshlogin 8/m3 --sshlogin 8/m4 bash -c '{}'

It automatically distributes every line of cluster.sh to the four nodes, and streams all their stdout back to your screen. Whenever a node has fewer than 8 tasks running, parallel automatically dispatches another one to it.
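For concreteness, here is what such a cluster.sh might look like -- just one independent shell command per line. The extract_features binary and the file names are made-up placeholders for illustration, not part of the original setup:

# one independent command per line; parallel treats each line as a job
./extract_features --input images/0001.jpg --output feats/0001.bin
./extract_features --input images/0002.jpg --output feats/0002.bin
./extract_features --input images/0003.jpg --output feats/0003.bin
# ... one line per image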

How? (TL; DR)

  • Use an automated script to do fast system configuration.
  • Use sshfs to do selective file transfer with compression, including training data transfer and result collection (see the sketch after this list).
  • Use GNU parallel to do job submission.
  • An AMI can also be used to further expedite virtual machine initialization.
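As an illustration of the sshfs step, here is a minimal sketch. The host alias m1 and the paths are assumptions for illustration; compression=yes asks ssh to compress the traffic, and reconnect lets the mount survive brief connection drops:

# mount a remote data directory over ssh, with compression
mkdir -p ~/remote-data
sshfs m1:/home/ubuntu/data ~/remote-data -o compression=yes,reconnect

# files can now be read/written as if local, e.g. selective result collection
cp ~/remote-data/results/*.txt ~/local-results/

# unmount when done
fusermount -u ~/remote-data

Because sshfs only transfers the bytes you actually read or write, you get selective transfer for free, instead of copying whole directories around with scp.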

How?

  1. Create spot instances on AWS (see the CLI sketch after this list).
  2. On each machine, run curl https://grapeot.me/aws.sh | sh if that fits you. Or git clone http://github.com/grapeot/debianinit and execute setup-ubuntu.sh to initialize the system. Note the script is personalized for me, with python and vim support. Fork it to add your own stuff.
  3. That's it for configuration. To submit jobs, use parallel. Let's look at this example:
cat cluster.sh | parallel --sshlogin 8/m1 --sshlogin 8/m2 --sshlogin 8/m3 --sshlogin 8/m4 bash -c '{}'
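For step 1, the instances can be requested from the web console, or scripted with the AWS CLI. Below is a minimal sketch, assuming the aws command-line tool is installed and configured; the bid price, AMI id, and key name are placeholders you would fill in yourself. Four c3.8xlarge instances (32 vCPUs and 60GB memory each) give the 128-core, 240GB cluster from the example above:

# request 4 one-time spot instances; substitute your own AMI id and key name
aws ec2 request-spot-instances \
    --spot-price "0.30" \
    --instance-count 4 \
    --type "one-time" \
    --launch-specification '{
        "ImageId": "ami-xxxxxxxx",
        "InstanceType": "c3.8xlarge",
        "KeyName": "my-key"
    }'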

Back to the parallel command in step 3: we already explained what it does, and here are more details. --sshlogin means to send tasks to remote machines. In --sshlogin 8/m1, the m1 part tells parallel to send jobs to an ssh host named m1, which you can configure in ~/.ssh/config, and the 8 means to keep at most 8 tasks running on that host. bash -c '{}' is the actual command to execute on the remote machine, with {} as the placeholder for each line from stdin. parallel is much more flexible than this, and I'll leave the exploration of more switches and usage to you. :)
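For reference, a ~/.ssh/config entry for one node might look like the following; the public DNS name and the key path are placeholders you would fill in from the EC2 console (repeat the block as m2, m3, m4 for the other nodes):

# ~/.ssh/config
Host m1
    HostName ec2-xx-xx-xx-xx.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/aws-key.pem

With this in place, ssh m1 works without extra flags, and so does parallel's --sshlogin 8/m1.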

Reposted from: https://www.cnblogs.com/huashiyiqike/p/3855539.html
