Java Web Crawler (11) -- Refactoring Scheduled Crawling and the IP Proxy Pool (Multithreading + Redis + Code Optimization)


I have always felt that my earlier post on building an IP proxy pool with scheduled crawling was far too crude, and some of the code was simply not well designed. Since I have recently been studying multithreading, I refactored that code, which should also help anyone who needs to crawl proxy IPs. I am not deleting the old article: it uses MySQL and a round-robin way of cycling through IPs, some of which is still worth knowing (take the good, discard the bad), so have a look if you are interested (mostly, though, I just cannot bear to give up the page views, ha).

Note: the proxies listed on the xici proxy site are not very stable, so the pool I built is not really usable (of roughly 4,000 IPs, only about 30 survive the whole processing and filtering pipeline, and even those 30 are highly unstable). Still, this is essentially how an IP proxy pool works; further work would mostly be performance tuning. Even though the resulting pool is not practical, if you want to learn multithreading or how to crawl proxy IPs for a spider, I think this article can give you some useful ideas.


How to design an IP proxy pool

Designing an IP proxy pool is actually quite straightforward. The steps are:

1. Crawl the IPs from a site that publishes free proxies.

2. Run a first-pass filter over the crawled IPs; for example, I immediately drop any proxy whose type is not HTTPS or whose connection time is longer than 2 seconds (a sketch of such a filter follows this list).

3. Quality-check the IPs that pass the filter to see whether they actually work; this is the step that eliminates most of them.

4. Write the qualified IPs into Redis; I store them as a Redis List.

5. Set a crawl period for refreshing the pool (after the new IPs have been crawled and processed, clear the old data and write in the new IPs).
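
For step 2, the filter is nothing more than a single pass over the crawled list. Below is a minimal sketch of what such a filter might look like; the class name FirstPassFilter and the IPMessage getters (getIPType, getIPSpeed) are my own assumptions for illustration, not necessarily the names used in the article's actual IPFilter class.

import java.util.ArrayList;
import java.util.List;

public class FirstPassFilter {
    // Keep only HTTPS proxies whose listed connection time is at most 2 seconds.
    // getIPType()/getIPSpeed() are assumed getters on IPMessage.
    public static List<IPMessage> apply(List<IPMessage> candidates) {
        List<IPMessage> kept = new ArrayList<>();
        for (IPMessage candidate : candidates) {
            boolean isHttps = "HTTPS".equalsIgnoreCase(candidate.getIPType());
            boolean fastEnough = candidate.getIPSpeed() <= 2.0;   // seconds
            if (isHttps && fastEnough) {
                kept.add(candidate);
            }
        }
        return kept;
    }
}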

For step 5, I simply clear the database first each time, then crawl and filter the new IPs, and only then write them in. That is clearly not ideal, because it leaves the proxy pool empty for a long stretch; I only realized this while writing this post (I am too lazy to change the code, so just be aware of the problem).
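
One way to avoid that empty window, sketched below under my own assumptions (the Jedis client and the key names "proxy:pool" and "proxy:pool:tmp" are my choices, and proxies are shown as plain "ip:port" strings purely for illustration), is to build the new pool under a temporary key and then atomically RENAME it over the live key.

import redis.clients.jedis.Jedis;
import java.util.List;

public class PoolRefresh {
    public static void swapIn(Jedis jedis, List<String> freshProxies) {
        jedis.del("proxy:pool:tmp");                   // start from a clean temporary key
        for (String proxy : freshProxies) {
            jedis.rpush("proxy:pool:tmp", proxy);      // build the new pool off to the side
        }
        if (!freshProxies.isEmpty()) {
            // RENAME is atomic, so readers always see either the old pool or the new one
            jedis.rename("proxy:pool:tmp", "proxy:pool");
        }
    }
}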


Overall architecture

(Architecture diagram not reproduced here.)


Implementation details

Blocking in HttpResponse

When crawling pages through a proxy IP, the request often blocks for a long time and the program then throws an error (I am not sure of the exact cause; if you really want to get to the bottom of it, read the HttpClient source). Setting sensible connection timeouts and wrapping the call in an appropriate try/catch is enough to deal with it.

For example:

/**
 * setConnectTimeout: the connection timeout, in milliseconds.
 * setConnectionRequestTimeout: the timeout, in milliseconds, for obtaining a
 * connection from the connection manager; this option was added because recent
 * versions can share a connection pool.
 * setSocketTimeout: the socket (data) timeout, in milliseconds; if the server
 * returns no data within this time, the call is abandoned.
 */

// entity (String) and httpClient (CloseableHttpClient) are created earlier in the surrounding method
HttpHost proxy = new HttpHost(ip, Integer.parseInt(port));
RequestConfig config = RequestConfig.custom()
        .setProxy(proxy)
        .setConnectTimeout(3000)
        .setSocketTimeout(3000)
        .build();
HttpGet httpGet = new HttpGet(url);
httpGet.setConfig(config);

try {
    // Execute the GET request through the client and obtain the response
    CloseableHttpResponse httpResponse = httpClient.execute(httpGet);

    // Read the body only when the server answers with status 200
    if (httpResponse.getStatusLine().getStatusCode() == 200) {
        entity = EntityUtils.toString(httpResponse.getEntity(), "utf-8");
    }

    httpResponse.close();
    httpClient.close();
} catch (ClientProtocolException e) {
    entity = null;
} catch (IOException e) {
    entity = null;
}

return entity;

Implementing the thread class

The main things to watch with multithreading are thread safety and keeping the threads synchronized so that no dirty reads occur; this has to be judged against the overall code logic, so I will not go into detail here. In case anything is unclear, the code is heavily commented; you can read the full source on my GitHub, and questions and discussion are welcome in the comments.

Here is the service class the worker threads use to crawl proxy IPs:

public class IPPool {
    // Shared member variable (not thread-safe by itself)
    private List<IPMessage> ipMessages;

    public IPPool(List<IPMessage> ipMessages) {
        this.ipMessages = ipMessages;
    }

    public void getIP(List<String> urls) {
        String ipAddress;
        String ipPort;

        for (int i = 0; i < urls.size(); i++) {
            // Pick a proxy IP at random. (On reflection: other threads may append to
            // ipMessages after the index is chosen; that would not move an element we
            // have already picked, but reads and writes of the shared list should still
            // be atomic, otherwise dirty reads can easily occur.)

            // Each thread first collects the IPs it crawls into its own list,
            // then filters and quality-checks them
            List<IPMessage> ipMessages1 = new ArrayList<>();
            String url = urls.get(i);

            synchronized (ipMessages) {
                int rand = (int) (Math.random() * ipMessages.size());
                out.println("Thread " + Thread.currentThread().getName() + " rand: " + rand + " ipMessages size: " + ipMessages.size());
                ipAddress = ipMessages.get(rand).getIPAddress();
                ipPort = ipMessages.get(rand).getIPPort();
            }

            // Note how non-primitive arguments are passed in Java: the callee works on the same object
            boolean status = URLFecter.urlParse(url, ipAddress, ipPort, ipMessages1);
            // If the chosen proxy is unusable, switch to another IP and re-crawl this page
            if (!status) {
                i--;
                continue;
            } else {
                out.println("Thread " + Thread.currentThread().getName() + " successfully crawled " +
                        url + " ipMessages1: " + ipMessages1.size());
            }

            // Filter again: keep only HTTPS proxies that respond within two seconds
            ipMessages1 = IPFilter.Filter(ipMessages1);

            // Quality-check the IPs and remove the ones that fail from the list
            IPUtils.IPIsable(ipMessages1);

            // Merge the qualified IPs into the shared ipMessages; the merge must be atomic
            synchronized (ipMessages) {
                out.println("Thread " + Thread.currentThread().getName() + " entered the merge section, ipMessages1 to merge: " + ipMessages1.size());
                ipMessages.addAll(ipMessages1);
            }
        }
    }
}
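
The article does not show here how the threads themselves are started, so below is a minimal sketch of one way the page URLs might be split across a few workers that share one ipMessages list. The class name, the thread count and the round-robin partitioning are my own assumptions for illustration; the real startup code is in the GitHub repository.

import java.util.ArrayList;
import java.util.List;

public class CrawlLauncher {
    // Divide the proxy-list pages among several worker threads that all merge
    // their results into the same shared ipMessages list.
    public static List<IPMessage> crawl(List<String> allUrls, List<IPMessage> seedProxies,
                                        int threadCount) throws InterruptedException {
        // shared list; IPPool guards access to it with synchronized blocks
        List<IPMessage> ipMessages = new ArrayList<>(seedProxies);
        List<Thread> workers = new ArrayList<>();

        for (int t = 0; t < threadCount; t++) {
            // give each worker every threadCount-th URL so the pages are divided evenly
            List<String> slice = new ArrayList<>();
            for (int i = t; i < allUrls.size(); i += threadCount) {
                slice.add(allUrls.get(i));
            }
            Thread worker = new Thread(() -> new IPPool(ipMessages).getIP(slice), "ip-worker-" + t);
            workers.add(worker);
            worker.start();
        }
        for (Thread worker : workers) {
            worker.join();   // wait for every page before writing the pool to Redis
        }
        return ipMessages;
    }
}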

Writing IPMessage objects (which hold the IP information) into Redis

Redis only stores strings and byte streams, so to put an object into a Redis List we first have to serialize it (turn it into a byte stream); conversely, to get the data back out of Redis we have to deserialize it. If serialization and deserialization are new to you, look them up; I will not go into detail. Here is that part of the code:

Serialization and deserialization (Java may well ship ready-made utilities for this; I have not looked into it):

/**
 * Created by hg_yi on 17-8-9.
 *
 * java.io.ObjectOutputStream is an object output stream: its writeObject(Object obj)
 * method serializes the given obj and writes the resulting byte sequence to a target
 * output stream.
 *
 * java.io.ObjectInputStream is an object input stream: its readObject() method reads
 * a byte sequence from a source input stream, deserializes it back into an object and
 * returns it.
 *
 * Serializing an object:
 * 1) create an object output stream, wrapping some target output stream such as a
 *    file output stream (here, a byte-array stream);
 * 2) write the object with the stream's writeObject() method.
 *
 * Deserializing an object:
 * 1) create an object input stream, wrapping some source input stream such as a
 *    file input stream (here, a byte-array stream);
 * 2) read the object with the stream's readObject() method.
 */

public class SerializeUtil {
    public static byte[] serialize(Object object) {
        ObjectOutputStream oos;
        ByteArrayOutputStream baos;

        try {
            // Serialize
            baos = new ByteArrayOutputStream();
            oos = new ObjectOutputStream(baos);
            oos.writeObject(object);

            byte[] bytes = baos.toByteArray();

            return bytes;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }

    // Deserialize
    public static Object unserialize(byte[] bytes) {
        ByteArrayInputStream bais;
        ObjectInputStream ois;

        try {
            // Deserialize
            bais = new ByteArrayInputStream(bytes);
            ois = new ObjectInputStream(bais);

            return ois.readObject();
        } catch (Exception e) {
            e.printStackTrace();
        }

        return null;
    }
}
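
Below is a minimal sketch of how the serialized IPMessage objects could then be pushed into a Redis list and read back out. The Jedis client, the key name "proxy:pool" and the class name are my own choices for illustration; note that IPMessage must implement java.io.Serializable for this to work.

import redis.clients.jedis.Jedis;
import java.util.ArrayList;
import java.util.List;

public class IPMessageRepository {
    private static final byte[] KEY = "proxy:pool".getBytes();

    // Serialize each IPMessage and append it to the Redis list
    public static void saveAll(Jedis jedis, List<IPMessage> ipMessages) {
        for (IPMessage ipMessage : ipMessages) {
            jedis.rpush(KEY, SerializeUtil.serialize(ipMessage));
        }
    }

    // Read every element of the list and deserialize it back into an IPMessage
    public static List<IPMessage> loadAll(Jedis jedis) {
        List<IPMessage> result = new ArrayList<>();
        for (byte[] bytes : jedis.lrange(KEY, 0, -1)) {
            result.add((IPMessage) SerializeUtil.unserialize(bytes));
        }
        return result;
    }
}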

Finally, for the scheduled task I no longer use Quartz as in the first post: it needs too much code and is hard to follow. I was too green back then and did not know about Timer, the scheduling class built into the JDK; it effectively starts a separate thread to run whatever task you hand it. I will not walk through its usage here, it is in the source, very convenient, and takes very little code.
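
For reference, a minimal sketch of Timer-based scheduling is shown below; the task body and the 30-minute period are placeholders of my own, not the article's actual settings.

import java.util.Timer;
import java.util.TimerTask;

public class CrawlScheduler {
    public static void main(String[] args) {
        Timer timer = new Timer("proxy-pool-refresh");
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                // crawl, filter, quality-check and write the fresh proxies to Redis here
                System.out.println("Refreshing the proxy pool ...");
            }
        }, 0, 30 * 60 * 1000L);   // start immediately, then repeat every 30 minutes
    }
}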

All right, enough rambling; here is what you are really after:

Click here for the source code~
