Scraping the 实习僧 (shixiseng.com) internship site

Problem encountered: the site uses a simple anti-scraping trick for numbers. Digits are rendered through a custom web font, so in the HTML source each digit appears as a character entity such as &#xf5e2 instead of the digit itself.

Solution: collect the entity codes for the digits 0-9 into a dictionary and substitute them back into the scraped text.
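To illustrate the idea, here is a minimal sketch. The sample string and the three-entry mapping are made up for the example; the full 0-9 dictionary appears in the code below:

replace_dict = {"&#xf770": "0", "&#xf5fa": "1", "&#xf5e2": "9"}  # excerpt, for illustration only
sample = "&#xf5fa&#xf770&#xf770-&#xf5fa&#xf5e2&#xf770/day"  # hypothetical raw text from the page
for entity, digit in replace_dict.items():
    sample = sample.replace(entity, digit)
print(sample)  # prints: 100-190/day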

The code is as follows:

import re
import time

import requests
import xlwt

headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
}
end_list = []  # one row per scraped job posting
# Map each custom-font entity back to the digit it renders as.
replace_dict = {
    "&#xf770":"0",
    "&#xf5fa":"1",
    "&#xf451":"2",
    "&#xe939":"3",
    "&#xede7":"4",
    "&#xf328":"5",
    "&#xed99":"6",
    "&#xf03b":"7",
    "&#xe9d2":"8",
    "&#xf5e2":"9"}
def get_links(url):
    # Fetch one listing page and visit every job-detail link on it.
    wb_data = requests.get(url, headers=headers)
    wb_data.encoding = wb_data.apparent_encoding
    links = re.findall('class="name-box clearfix".*?href="(.*?)"', wb_data.text, re.S)
    for link in links:
        get_infos('https://www.shixiseng.com' + link)
def get_infos(url):
    # Fetch one job-detail page and pull out the fields we need.
    wb_data = requests.get(url, headers=headers)
    wb_data.encoding = wb_data.apparent_encoding
    salarys = re.findall('class="job_money cutom_font">(.*?)</span>', wb_data.text, re.S)
    addresses = re.findall('class="job_position">(.*?)</span>', wb_data.text, re.S)
    educations = re.findall('class="job_academic">(.*?)</span>', wb_data.text, re.S)
    jobways = re.findall('class="job_week cutom_font">(.*?)</span>', wb_data.text, re.S)
    months = re.findall('class="job_time cutom_font">(.*?)</span>', wb_data.text, re.S)
    jobgoods = re.findall('class="job_good".*?>(.*?)</div>', wb_data.text, re.S)
    blocks = re.findall(r'div class="job_til">([\s\S]*?)<div class="job_til">', wb_data.text, re.S)
    contents = blocks[0].replace(' ', '').replace('\n', '').replace('&nbsp;', '') if blocks else ''
    contents = re.sub(r'<[\s\S]*?>', '', contents)  # strip any leftover tags
    for salary, address, education, jobway, month, jobgood in zip(salarys, addresses, educations, jobways, months, jobgoods):
        # Digits in these three fields arrive as custom-font entities; decode them.
        for key, value in replace_dict.items():
            salary = salary.replace(key, value)
            jobway = jobway.replace(key, value)
            month = month.replace(key, value)
        row = [url, salary, address, education, jobway, month, jobgood, contents]
        end_list.append(row)  # append inside the loop so no record is dropped
if __name__ == '__main__':
    try:
        urls = ['https://www.shixiseng.com/it/{}'.format(i)
                for i in range(1, 10)]
        for q, url in enumerate(urls, start=1):
            print('Crawling page %d' % q)
            get_links(url)
            time.sleep(3)  # pause between listing pages to go easy on the server
        book = xlwt.Workbook(encoding='utf-8')
        sheet = book.add_sheet('newjobmessage')
        header = ['URL', 'Daily pay', 'Location', 'Education',
                  'Days per week', 'Duration', 'Perks', 'Requirements']
        for h, title in enumerate(header):
            sheet.write(0, h, title)
        # Data rows start at row 1, below the header row.
        for i, row in enumerate(end_list, start=1):
            for j, data in enumerate(row):
                sheet.write(i, j, data)
        book.save('123.xls')
    except Exception as e:
        print('endprocess:', e)
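
A side note on the decoding step: the nested str.replace loop works, but the same mapping can be applied in a single pass with re.sub. A sketch, assuming the replace_dict defined above:

import re

entity_pattern = re.compile('|'.join(re.escape(k) for k in replace_dict))

def decode_digits(text):
    # Replace every matched entity with its digit in one scan of the string.
    return entity_pattern.sub(lambda m: replace_dict[m.group(0)], text)

# Usage inside get_infos, e.g.: salary = decode_digits(salary)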

Result screenshot:

Reposted from: https://www.cnblogs.com/mayunji/p/8779016.html
