#-0 Create a virtual environment
```
mkdir -p python/scrapy
virtualenv --no-site-packages -p python3 python/scrapy
```
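#Activate the new environment before installing packages into it: source python/scrapy/bin/activate (the (scrapy) prefix in the prompts below shows it is active)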
#If you see: Command /home/tomblack/python/scrapy/bin/python3 - setuptools pkg_resources pip wheel failed with error code 2
#run in a shell: sudo pip3 install --upgrade pip
```
(scrapy) tomblack@tomblack-Inspiron-7559:~$ pip install Scrapy
```
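#You can verify the installation with: scrapy version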
#-1 Create my first Scrapy project
#Common command 1: scrapy startproject [project name]
```
(scrapy) tomblack@tomblack-Inspiron-7559:~$ scrapy startproject python123demo
New Scrapy project 'python123demo', using template directory '/home/tomblack/python/scrapy/lib/python3.6/site-packages/scrapy/templates/project', created in:
    /home/tomblack/python123demo

You can start your first spider with:
    cd python123demo
    scrapy genspider example example.com
```
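#For reference, startproject generates roughly this layout (from Scrapy's standard project template; minor version-to-version differences are possible):
```
python123demo/
    scrapy.cfg            # deploy configuration file
    python123demo/        # the project's Python module
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
```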
#-2 Create a spider: scrapy genspider [name] [domain], e.g. scrapy genspider demo python123.io
```
(scrapy) tomblack@tomblack-Inspiron-7559:~/python123demo$ scrapy genspider demo python123.io
```
#This generates python123demo/spiders/demo.py:
```
# -*- coding: utf-8 -*-
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    allowed_domains = ['python123.io']  # only links under this domain will be crawled
    start_urls = ['http://python123.io/']  # URLs the crawl starts from

    def parse(self, response):  # parse the crawled content and produce a dict
        pass
```
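#A side note: start_urls works because Scrapy's default start_requests() turns each URL into a Request whose response is handed to parse(). A sketch of that implicit behavior, using a hypothetical spider name for illustration only:
```
import scrapy


class DemoExplicitSpider(scrapy.Spider):
    # hypothetical spider, for illustration only
    name = 'demo_explicit'

    def start_requests(self):
        # what Scrapy does implicitly with start_urls:
        # yield one Request per URL, with parse() as the callback
        urls = ['http://python123.io/ws/demo.html']
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        self.log('Visited %s' % response.url)
```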
#-3 Configure the generated spider by editing demo.py
```
# -*- coding: utf-8 -*-
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    #allowed_domains = ['python123.io']
    start_urls = ['http://python123.io/ws/demo.html']

    def parse(self, response):
        fname = response.url.split('/')[-1]  # e.g. 'demo.html'
        with open(fname, 'wb') as f:
            f.write(response.body)  # save the raw page body to a local file
        self.log('Saved file %s.' % fname)
```
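#Note: fname is a relative path, so the saved demo.html lands in whatever directory you run the crawl from (here ~/python123demo)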
#-4 Run the crawler: scrapy crawl [spider name], e.g. scrapy crawl demo
```
(scrapy) tomblack@tomblack-Inspiron-7559:~/python123demo$ scrapy crawl demo
```
#yield: a generator is a function that keeps producing values; instead of producing all values at once, each use of yield produces a single value, which saves memory
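#A minimal sketch of the idea (the squares function below is hypothetical, for illustration only):
```
def squares(n):
    # a generator: execution pauses at yield and resumes on the next request,
    # so only one value needs to exist in memory at a time
    for i in range(n):
        yield i * i


for s in squares(5):
    print(s)  # prints 0, 1, 4, 9, 16, one value per iteration

# Scrapy relies on the same mechanism: parse() is usually written as a
# generator that yields items (or further Requests) one by one, e.g.:
#
#     def parse(self, response):
#         for href in response.css('a::attr(href)').extract():
#             yield {'link': href}
```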