Connecting Scrapy to MySQL

This article explains how to set up the pipelines.py file in a Scrapy project so that scraped data is stored in a MySQL database via the pymysql module. It provides complete code examples covering the key steps: connecting to the database, inserting data, and handling exceptions.



Add the following code to pipelines.py:

```python
# Write scraped data to a MySQL database
import pymysql


class MySQLPipeline(object):

    # Open the database connection when the spider starts
    def open_spider(self, spider):
        db = spider.settings.get('MYSQL_DB_NAME', 'scrapy_db')
        host = spider.settings.get('MYSQL_HOST', 'localhost')
        port = spider.settings.get('MYSQL_PORT', 3306)
        user = spider.settings.get('MYSQL_USER', 'root')
        passwd = spider.settings.get('MYSQL_PASSWORD', '123456')

        self.db_conn = pymysql.connect(host=host, port=port, db=db,
                                       user=user, passwd=passwd,
                                       charset='utf8')
        self.db_cur = self.db_conn.cursor()

    # Commit pending changes and close the connection when the spider finishes
    def close_spider(self, spider):
        self.db_conn.commit()
        self.db_conn.close()

    # Process each item by inserting it into the database
    def process_item(self, item, spider):
        self.insert_db(item)
        return item

    # Insert one item's fields as a row of the books table
    def insert_db(self, item):
        values = (
            item['upc'],
            item['name'],
            item['price'],
            item['review_rating'],
            item['review_num'],
            item['stock'],
        )
        sql = 'INSERT INTO books VALUES (%s, %s, %s, %s, %s, %s)'
        try:
            self.db_cur.execute(sql, values)
            self.db_conn.commit()
        except pymysql.MySQLError as e:
            # Roll back the failed transaction; closing the connection here
            # would break all subsequent items
            self.db_conn.rollback()
            print("Insert into DB failed:", e)
```
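The pipeline above reads its connection parameters from the project settings, and it also has to be enabled there before Scrapy will call it. A minimal sketch of the relevant settings.py entries, assuming the project is named `myproject` (substitute your own project name and credentials):

```python
# settings.py -- enable the pipeline and supply the connection parameters
# that MySQLPipeline.open_spider() reads via spider.settings.get()
ITEM_PIPELINES = {
    'myproject.pipelines.MySQLPipeline': 300,  # 'myproject' is a placeholder
}

MYSQL_DB_NAME = 'scrapy_db'
MYSQL_HOST = 'localhost'
MYSQL_PORT = 3306
MYSQL_USER = 'root'
MYSQL_PASSWORD = '123456'
```

Note that the `books` table must already exist with six columns matching the order of the `values` tuple, since the INSERT statement does not name its columns.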

For more details, see:

Connecting Scrapy to various databases (SQLite, MySQL, MongoDB, Redis)

You can connect Scrapy to MySQL with the following code.

1. First, add the following to the Scrapy project's settings.py file:

```
ITEM_PIPELINES = {
    'myproject.pipelines.MySQLPipeline': 300,
}
MYSQL_HOST = 'localhost'
MYSQL_DBNAME = 'mydatabase'
MYSQL_USER = 'myusername'
MYSQL_PASSWORD = 'mypassword'
```

2. Then, add the following to the project's pipelines.py file:

```
import pymysql

class MySQLPipeline(object):
    def __init__(self, host, dbname, user, password):
        self.host = host
        self.dbname = dbname
        self.user = user
        self.password = password

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            host=crawler.settings.get('MYSQL_HOST'),
            dbname=crawler.settings.get('MYSQL_DBNAME'),
            user=crawler.settings.get('MYSQL_USER'),
            password=crawler.settings.get('MYSQL_PASSWORD')
        )

    def open_spider(self, spider):
        self.conn = pymysql.connect(
            host=self.host,
            user=self.user,
            password=self.password,
            db=self.dbname,
            charset='utf8mb4',
            cursorclass=pymysql.cursors.DictCursor
        )

    def close_spider(self, spider):
        self.conn.close()

    def process_item(self, item, spider):
        with self.conn.cursor() as cursor:
            sql = "INSERT INTO mytable (column1, column2, column3) VALUES (%s, %s, %s)"
            cursor.execute(sql, (item['column1'], item['column2'], item['column3']))
        self.conn.commit()
        return item
```

3. Finally, define your item in the project's items.py file:

```
import scrapy

class MyItem(scrapy.Item):
    column1 = scrapy.Field()
    column2 = scrapy.Field()
    column3 = scrapy.Field()
```

With this in place, running the Scrapy spider will store the scraped data in the MySQL database.
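Both pipelines rely on parameterized queries (passing a tuple of values to `execute()` rather than formatting them into the SQL string), which lets the driver escape values safely. The same pattern can be tried without a MySQL server using Python's built-in sqlite3 module; this is only a stand-in sketch (sqlite3 uses `?` placeholders where pymysql uses `%s`), and the table schema and item values below are hypothetical examples:

```python
import sqlite3

# In-memory stand-in for the MySQL database
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE books (upc TEXT, name TEXT, price TEXT, '
            'review_rating TEXT, review_num TEXT, stock TEXT)')

# Example item dict, mimicking what the spider would yield
item = {
    'upc': 'a897fe39b1053632',
    'name': 'A Light in the Attic',
    'price': '51.77',
    'review_rating': 'Three',
    'review_num': '0',
    'stock': '22',
}

# Same tuple-of-values pattern as insert_db() in the pipeline above;
# the driver substitutes the placeholders and escapes each value
values = (item['upc'], item['name'], item['price'],
          item['review_rating'], item['review_num'], item['stock'])
cur.execute('INSERT INTO books VALUES (?, ?, ?, ?, ?, ?)', values)
conn.commit()

cur.execute('SELECT name, stock FROM books')
print(cur.fetchone())  # ('A Light in the Attic', '22')
```

Never build the SQL string with `%` formatting or f-strings from item fields; scraped text routinely contains quotes that would break the statement or open it to injection.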