Meituan Popularity Boards for Categories 1-8 --- Deal Board Data

This post presents a Python crawler that scrapes deal-board (套餐榜) ranking data from Meituan. Swapping the cityId in the request URL yields the sales boards of different cities; the program parses the JSON response, extracts key fields such as the deal ID, name, and weekly sales count, and appends the rows to a CSV file. The crawler uses the requests library for HTTP requests, the json module to parse responses, and the csv module to write the data.
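For reference, the response body that the parser expects looks roughly like the sketch below. The wrapper keys (totalSize, saleBoardDealList) and the per-deal fields are taken from the code itself; the placeholder values are illustrative, not actual API output:

{
  "totalSize": 50,
  "saleBoardDealList": [
    {
      "id": ..., "name": ..., "weekSaleCount": ..., "frontImg": ...,
      "dishes": ..., "price": ..., "value": ..., "discount": ...,
      "recommendDish": ..., "rank": ...,
      "saleBoardDealPoi": {"name": ...}
    }
  ]
}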



# -*- coding:utf-8 -*-
# To crawl a different city, change only the cityId value in the URL below
import requests
import time
import random
import csv
import json

################## boardType=9 in the URL is the deal (套餐) board; boardType=8 is the merchant board, but the JSON fields may differ between the two

class MeituanSpider(object):
    def __init__(self):
        self.url = "https://mobilenext-web.meituan.com/api/newSalesBoard/getSaleBoardDetail?cityId=96&boardType=9&districtId=0&cateId={}&offset=0&limit=15&lat=36.526046191159445&lng=122.062217811"

        self.headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.163 Safari/535.1"}

    def get_page(self, url):
        """Fetch one board page and pass the response body to the parser."""
        res = requests.get(url=url, headers=self.headers)
        res.encoding = "utf-8"
        self.parse_page(res.text)

    def parse_page(self, html):
        """Parse the board JSON and append one CSV row per deal."""
        # The body has the shape {"totalSize":50,"saleBoardDealList":[...]};
        # parse it as JSON rather than slicing the string at fixed offsets.
        deals = json.loads(html)["saleBoardDealList"]
        with open('meituan.csv', 'a', newline='', encoding='utf-8') as f:
            writer = csv.writer(f)
            for deal in deals:  # `deal` avoids shadowing the built-in list
                print(deal)  # progress/debug output
                writer.writerow([
                    deal["id"], deal["name"], deal["weekSaleCount"],
                    deal["frontImg"], deal["dishes"], deal["price"],
                    deal["value"], deal["discount"], deal["recommendDish"],
                    deal["rank"], deal["saleBoardDealPoi"],
                    deal["saleBoardDealPoi"]["name"],
                ])

    def main(self):
        # cateId 1 through 8: the eight category boards named in the title
        for i in range(1, 9):
            time.sleep(random.randint(3, 5))  # throttle between requests
            url = self.url.format(i)
            print(url)
            self.get_page(url)

if __name__ == '__main__':
    start = time.time()
    spider = MeituanSpider()
    spider.main()
    end = time.time()
    print("执行时间:%.2f" % (end - start))