Python: watch out for spaces and the Tab key

This post shares a problem I ran into while writing a spider with the Scrapy framework: how stray spaces inside a URL can make a request fail. By comparing the broken and the corrected code, it explains exactly where the problem is and how to fix it.


I've recently been learning to write spiders with the Scrapy framework, and I got burned badly by stray spaces and tabs. Here's the code:

Incorrect code:

import scrapy
import re


class GithubSpider(scrapy.Spider):
    name = 'Github'
    allowed_domains = ['github.com']
    start_urls = ['https://github.com/login']

    def parse(self, response):
        authenticity_token = response.xpath("//input[@name='authenticity_token']/@value").extract_first()
        utf8 = response.xpath("//input[@name='utf8']/@value").extract_first()
        commit = response.xpath("//input[@name='commit']/@value").extract_first()
        post_data = dict(
                login="noobpythoner",
                password="zhoudawei123",
                authenticity_token=authenticity_token,
                utf8=utf8,
                commit=commit
        )
        yield scrapy.FormRequest(
            "https: //  github.com  /   session",
            formdata=post_data,
            callback=self.after_login
        )

    def after_login(self, response):
        # with open("./a.html", "w", encoding="utf-8") as file:
        #     file.write(response.body.decode())
        print(re.findall("noobpythoner|NoobPythoner", response.body.decode()))
        # print(response.body.decode())

The mistake in the code above is in the first argument to scrapy.FormRequest(): the URL was copied straight from the browser, and the stray spaces inside it cause the request to fail, so the response comes back with no data.
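One way to guard against this is to strip whitespace from any URL you paste in from a browser before handing it to Scrapy. Below is a minimal sketch; clean_pasted_url is a hypothetical helper name, not part of Scrapy:

```python
import re

# Hypothetical helper: remove all whitespace (spaces, tabs, newlines)
# that can sneak into a URL copied from the address bar or page source.
def clean_pasted_url(url: str) -> str:
    return re.sub(r"\s+", "", url)

broken = "https: //  github.com  /   session"
print(clean_pasted_url(broken))  # https://github.com/session
```

Note this only makes sense for URLs that should contain no whitespace at all; a space that is a legitimate part of a query string should be percent-encoded as %20 instead of removed.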

Correct code:

import scrapy
import re


class GithubSpider(scrapy.Spider):
    name = 'Github'
    allowed_domains = ['github.com']
    start_urls = ['https://github.com/login']

    def parse(self, response):
        authenticity_token = response.xpath("//input[@name='authenticity_token']/@value").extract_first()
        utf8 = response.xpath("//input[@name='utf8']/@value").extract_first()
        commit = response.xpath("//input[@name='commit']/@value").extract_first()
        post_data = dict(
                login="noobpythoner",
                password="zhoudawei123",
                authenticity_token=authenticity_token,
                utf8=utf8,
                commit=commit
        )
        yield scrapy.FormRequest(
            "https://github.com/session",
            formdata=post_data,
            callback=self.after_login
        )

    def after_login(self, response):
        # with open("./a.html", "w", encoding="utf-8") as file:
        #     file.write(response.body.decode())
        print(re.findall("noobpythoner|NoobPythoner", response.body.decode()))
        # print(response.body.decode())

This time the request goes through: the login succeeds and re.findall matches the username in the response body.
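As a cheap insurance policy, you can assert that a URL contains no whitespace before building the request, so a pasted-in space fails loudly at startup instead of silently returning an empty response. A minimal sketch:

```python
# Fail fast if a URL contains whitespace, instead of letting the
# request silently come back empty.
def check_url(url: str) -> str:
    if any(ch.isspace() for ch in url):
        raise ValueError("URL contains whitespace: %r" % url)
    return url

url = check_url("https://github.com/session")
print(url)  # https://github.com/session
```

Calling check_url("https: // github.com / session") would raise a ValueError immediately, pointing you straight at the offending string.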
