Source code download: https://download.youkuaiyun.com/download/dabao87/11997988
Setting up the virtual environment and installing Python are not covered here; if you need help with that, see my other articles.
Setting up a virtual environment: https://blog.youkuaiyun.com/dabao87/article/details/102743386
Create a project from the command line:
scrapy startproject anjuke
This command creates a folder named anjuke containing a set of files for you to configure.
Create a spider:
First, change into the project folder you just created:
cd anjuke
scrapy genspider anju shanghai.anjuke.com
The folder structure should now look like this:
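(The original screenshot is not reproduced here. For reference, a project generated by the two commands above follows the standard Scrapy layout:)
anjuke/
    scrapy.cfg
    anjuke/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            anju.py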
Create the item
An item is a container for the scraped data; it is used much like a dict.
Modify items.py as follows:
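The item code appeared as a screenshot in the original post, so here is a minimal sketch of items.py; the field names are taken from the spider shown further down (tags is optional, since the spider leaves it commented out):
# -*- coding: utf-8 -*-
import scrapy


class AnjukeItem(scrapy.Item):
    # fields referenced by the spider in anju.py
    name = scrapy.Field()     # community name
    address = scrapy.Field()  # address
    type_ = scrapy.Field()    # layout, e.g. two bedrooms and one living room
    tags = scrapy.Field()     # tags the site assigns to the listing (optional)
    area = scrapy.Field()     # floor area
    price = scrapy.Field()    # price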
Configure settings.py
A few important things need to be configured in settings.py. Anjuke blocks obvious crawlers, so the requests have to look like a normal user's.
Uncomment the USER_AGENT setting in settings.py.
Copy a user-agent string from your browser (for example from the Network panel of the developer tools) and put it into the USER_AGENT setting in settings.py.
Also enable this setting:
COOKIES_ENABLED = False
Add import random at the top of the file, then set:
DOWNLOAD_DELAY = random.choice([1,2])
With that, the configuration is done and the crawler's requests look much more like a normal user's.
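Putting the pieces together, the relevant lines in settings.py look roughly like this (the user-agent string below is only an example; use the one copied from your own browser):
import random

# pretend to be a normal browser
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

# do not send cookies
COOKIES_ENABLED = False

# note: random.choice runs once when the settings are loaded, so one delay (1 or 2 seconds) applies to the whole run
DOWNLOAD_DELAY = random.choice([1, 2])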
The code in anju.py:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
from anjuke.items import AnjukeItem  # the item defined above


class AnjuSpider(scrapy.Spider):
    name = 'anju'
    allowed_domains = ['shanghai.anjuke.com']
    start_urls = ['https://shanghai.anjuke.com/sale/baoshan/m2401/']

    def parse(self, response):
        divs = response.xpath('//li[@class="list-item"]')  # select the listing blocks from the response with XPath
        for div in divs:
            item = AnjukeItem()  # instantiate an item
            # the community name and the address share one title attribute, so split it on "\xa0\xa0"
            address = div.xpath('.//span[@class="comm-address"]/@title').extract_first()
            item['address'] = address[address.index("\xa0\xa0") + 2:]  # address
            item['name'] = address[:address.index("\xa0\xa0")]         # community name
            try:
                # layout, e.g. two bedrooms and one living room
                item['type_'] = div.xpath('.//div[@class="details-item"]/span/text()').extract_first()
            except Exception:
                pass
            # item['tags'] = div.xpath('.//span[@class="item-tags tag-metro"]/text()').extract()  # tags the site assigns to the listing
            price = div.xpath('.//span[@class="price-det"]/strong/text()').extract_first()  # price
            item['price'] = price + '万'
            try:
                # floor area: the second span under details-item
                item['area'] = div.xpath('.//div[@class="details-item"]/span/text()').extract()[1]
            except Exception:
                pass
            yield item

        next_ = response.xpath('//div[@class="multi-page"]/a[@class="aNxt"]/@href').extract_first()  # link to the next page
        print('-------next----------')
        print(next_)
        if next_:
            yield response.follow(url=next_, callback=self.parse)  # queue the next page for crawling
Run:
scrapy crawl anju
The spider defined above is named anju, so that is the name used in the command.
Result:
Note that Anjuke may change its HTML tags and class names at any time, so don't copy my code verbatim; the selectors may no longer match.
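If the selectors stop matching, a quick way to check them is the Scrapy shell, for example (using the start URL and XPath from the spider above):
scrapy shell "https://shanghai.anjuke.com/sale/baoshan/m2401/"
# then, inside the shell:
response.xpath('//li[@class="list-item"]')   # should return a non-empty list if the selector still works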
Storing the data in MySQL
Reference: https://www.cnblogs.com/ywjfx/p/11102081.html
Create a database named:
scrapy
and a table named:
anjuke
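The original post only showed the table structure as a screenshot, so the column types below are my own guess; any schema whose columns match the INSERT in pipelines.py will work. A quick way to create both the database and the table with pymysql:
import pymysql

# connect without selecting a database so we can create it first
conn = pymysql.connect(host='127.0.0.1', user='root', password='111111', port=3306, charset='utf8')
cursor = conn.cursor()
cursor.execute('CREATE DATABASE IF NOT EXISTS scrapy DEFAULT CHARACTER SET utf8')
cursor.execute('''
    CREATE TABLE IF NOT EXISTS scrapy.anjuke (
        id INT AUTO_INCREMENT PRIMARY KEY,
        address VARCHAR(255),
        name VARCHAR(255),
        type_ VARCHAR(64),
        area VARCHAR(64),
        price VARCHAR(64)
    ) DEFAULT CHARSET=utf8
''')
conn.commit()
cursor.close()
conn.close()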
In pipelines.py:
import pymysql


class AnjukePipeline(object):
    def __init__(self):
        # connect to the MySQL database
        self.connect = pymysql.connect(host='127.0.0.1', user='root', password='111111',
                                       db='scrapy', port=3306, charset='utf8')
        self.cursor = self.connect.cursor()

    def process_item(self, item, spider):
        # write one record into the anjuke table; a parameterized query avoids quoting problems
        self.cursor.execute(
            'INSERT INTO anjuke(address, name, type_, area, price) VALUES (%s, %s, %s, %s, %s)',
            (item.get('address'), item.get('name'), item.get('type_'),
             item.get('area'), item.get('price'))
        )
        self.connect.commit()
        return item

    # close the database connection when the spider finishes
    def close_spider(self, spider):
        self.cursor.close()
        self.connect.close()
In settings.py
The main change is to uncomment ITEM_PIPELINES and point it at the AnjukePipeline defined above.
# -*- coding: utf-8 -*-
# Scrapy settings for anjuke project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
import random

BOT_NAME = 'anjuke'

SPIDER_MODULES = ['anjuke.spiders']
NEWSPIDER_MODULE = 'anjuke.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./anjuke.log"
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'
# Obey robots.txt rules
#ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = random.choice([1, 2])
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'anjuke.middlewares.AnjukeSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'anjuke.middlewares.AnjukeDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'anjuke.pipelines.AnjukePipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
# MySQL connection settings (these should match the connection used in pipelines.py)
# database host
MYSQL_HOST = 'localhost'
# database user
MYSQL_USER = 'root'
# database password
MYSQL_PASSWORD = '111111'
# database port
MYSQL_PORT = 3306
# database name
MYSQL_DBNAME = 'scrapy'
# database charset
MYSQL_CHARSET = 'utf8'
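One thing to notice: the AnjukePipeline above hardcodes its connection parameters and never reads these MYSQL_* settings. If you would rather keep the connection details in settings.py, a sketch of a settings-driven pipeline (my own variation, not from the original post) could look like this:
import pymysql


class AnjukePipeline(object):
    def __init__(self, host, user, password, port, db, charset):
        # connect with the parameters pulled from settings.py
        self.connect = pymysql.connect(host=host, user=user, password=password,
                                       port=port, db=db, charset=charset)
        self.cursor = self.connect.cursor()

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this hook with the crawler, whose settings are the ones in settings.py
        s = crawler.settings
        return cls(
            host=s.get('MYSQL_HOST'),
            user=s.get('MYSQL_USER'),
            password=s.get('MYSQL_PASSWORD'),
            port=s.getint('MYSQL_PORT'),
            db=s.get('MYSQL_DBNAME'),
            charset=s.get('MYSQL_CHARSET'),
        )

    def process_item(self, item, spider):
        self.cursor.execute(
            'INSERT INTO anjuke(address, name, type_, area, price) VALUES (%s, %s, %s, %s, %s)',
            (item.get('address'), item.get('name'), item.get('type_'),
             item.get('area'), item.get('price'))
        )
        self.connect.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.connect.close()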
Note the following:
Check whether pymysql is installed:
pip list
I had already installed it, but the crawl still reported ModuleNotFoundError: No module named 'pymysql'.
In that case, check which Python installation the module actually went into.
Run:
where python
I had two install locations: the system Python and the virtual environment. I had installed pymysql into the virtual environment, which is why the crawl still reported that the module was missing.
So I switched to the installed (system) Python environment and ran
pip list
pymysql was indeed not there, so I installed it right away.
Note: be sure to run the install from that Python installation's Scripts folder.
Run:
pip install pymysql
Check again whether pymysql is installed:
pip list
Now it should be installed.
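A quick way to confirm that the interpreter you are about to run Scrapy with can actually see the module:
python -c "import pymysql; print(pymysql.__version__)"
# if this prints a version number instead of ModuleNotFoundError, you are in the right environment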
Go back to the crawler project and run the spider again:
scrapy crawl anju
The data should now be written into the database.