Scraping My Jike Collection

I tend to collect the occasional NSFW picture on Jike, but its login mechanism gave me a hard time. I eventually found an article about it online, but that was also hard to follow. Once I roughly understood what was going on, I rolled my own version.

Jike's mechanism works like this: the user scans a QR code to enter the site and receives an access_token; a few minutes later a refresh_token is issued. On every later login, the client calls an endpoint that sends the refresh_token to the backend and gets back a freshly refreshed access_token. The very first refresh_token has to be copied out of the browser by hand. The code is as follows:

import requests

# Send the saved refresh_token to the backend to obtain a new access_token
def refresh_token(refresh_token):
    user_agent = getUserAgent()  # pick a random User-Agent (see below)
    url = "https://app.jike.ruguoapp.com/app_auth_tokens.refresh"
    headers = {"Origin": "https://web.okjike.com",
               "Referer": "https://web.okjike.com/collection",
               "User-Agent": user_agent}
    headers["x-jike-refresh-token"] = str(refresh_token)
    r = requests.get(url, headers=headers)
    return r.text  # a JSON string containing the new tokens
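Since that first refresh_token has to be copied out of the browser by hand anyway, I find it handier to keep it in a local file than to hard-code it. A minimal sketch, with the file name being my own convention rather than anything from Jike:

import os

TOKEN_FILE = "refresh_token.txt"  # file name is my own choice

def load_refresh_token():
    # read the refresh_token that was copied out of the browser's dev tools
    if not os.path.exists(TOKEN_FILE):
        raise SystemExit("paste the refresh_token from the browser into " + TOKEN_FILE)
    with open(TOKEN_FILE) as f:
        return f.read().strip()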

Building the user_agent; st is my own utility module:

import random

from tools import Tools as tl
from tools import Settings as st

def getUserAgent():
    # pick a random User-Agent string from my settings list
    return random.choice(st.user_agent_list)
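The tools module itself isn't shown in this post; all st really needs to provide is a list of User-Agent strings to rotate through. A hypothetical stand-in might look roughly like this (the exact strings are just examples):

# tools/Settings.py (hypothetical stand-in for my real settings module)
user_agent_list = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0",
]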

Once you have a fresh token, you can do whatever you like:

import json

if __name__ == '__main__':
    # the argument is the refresh_token copied from the browser the first time
    access_token = refresh_token('eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkYXRhIjoibjV0dVlqcVMrV0VVSDJKYTMwY0JYOTNcL1p4RTlqRExGTW1PZGRXcU9iaWZqOEZ3M3RrNjNNXC81enJsTUQ5ajNVMFVJRHZSNjlzYmhOWTBDejlQTXdXalwvSzBUcHRpRXJFMFZnXC9NSFVOYjFHaDVGajFzSEVKWm42TzR5aUk3XC9IaklrUENNeHNsSXRmNm1nVWdTUGZBbG1jZkNkdUdsblwvTGRvVGQ1UFJjQ3FNPSIsInYiOjMsIml2IjoiTWFQdTlpRUJqbUVcLzlIZURGdVVhZUE9PSIsImlhdCI6MTU1MzQwNzgwMS43NDl9.jWG-7-dUjZqSrgMJVnj1pIf52tqoSMHav_mop0_aABI')
    dic = json.loads(access_token)
    startSpider('https://app.jike.ruguoapp.com/1.0/users/collections/list', dic['x-jike-access-token'])
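Judging from how dic['x-jike-access-token'] is read above, the refresh endpoint returns a JSON body keyed by the same names as the headers. If the refresh_token has expired, the response presumably won't contain that key, so a small sanity check (my own defensive addition, not part of the original flow) can save some head-scratching:

import json

raw = refresh_token(my_refresh_token)  # my_refresh_token: the value copied from the browser
dic = json.loads(raw)
if 'x-jike-access-token' not in dic:  # key name inferred from the code above
    raise SystemExit("refresh failed; copy a fresh refresh_token from the browser")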

Now let the spider run!

loadMoreKey = None  # global pagination cursor; Jike's paging API expects it on every request

def startSpider(url, access_token):
    global loadMoreKey  # must be declared before loadMoreKey is read below
    user_agent = getUserAgent()
    headers = {"Accept": "application/json",
               "App-Version": "5.3.0",
               "Content-Type": "application/json",
               "Origin": "https://web.okjike.com",
               "platform": "web",
               "Referer": "https://web.okjike.com/collection",
               "User-Agent": user_agent}
    headers["x-jike-access-token"] = access_token
    tl.UsingHeaders = headers  # save the headers so downloads can reuse them; optional
    payload = {'limit': 20, 'loadMoreKey': loadMoreKey}

    response = requests.post(url, headers=headers, data=json.dumps(payload))
    print(response.status_code)
    data = json.loads(response.content.decode("utf-8"))
    loadMoreKey = data.get('loadMoreKey')  # becomes None on the last page
    for dic in data['data']:
        for picDic in dic.get('pictures', []):  # not every collected post has pictures
            tl.downLoadFile(picDic['picUrl'])
    print('------ done with this batch of 20 records ------')
    if loadMoreKey is not None:  # stop instead of recursing forever
        startSpider(url, access_token)
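Two notes on the spider. First, it pages by plain recursion, so a very large collection could in principle hit Python's recursion limit; rewriting the tail call as a while loop is trivial if that ever bites. Second, tl.downLoadFile comes from my tools module, which isn't shown here; a sketch of what it might do, with the signature and save path being my own assumptions:

import os
import requests

def downLoadFile(picurl, headers=None, save_dir="jike_pics"):
    # save one picture to disk, deriving a file name from the URL
    os.makedirs(save_dir, exist_ok=True)
    filename = picurl.split("/")[-1].split("?")[0] or "unnamed.jpg"
    r = requests.get(picurl, headers=headers or {}, timeout=30)
    if r.status_code == 200:
        with open(os.path.join(save_dir, filename), "wb") as f:
            f.write(r.content)  # pictures are binary, so write bytes

The real version presumably also reads the headers saved in tl.UsingHeaders so the download requests look the same as the API requests.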