NN_Limitation

This article discusses the limitations of AI/machine learning, including its dependence on large amounts of training data, lack of explainability, poor generalization, vulnerability to being fooled, limited understanding of context, and more abstract issues, and it reminds us to avoid over-hype while continuing to improve the technology.

Source: ShareTechnote (Machine Learning)

Limitations of AI/Machine Learning

Before I start discussing the limitations of AI/machine learning, I want to make one thing clear. The purpose of this page is not to show any personal skepticism of mine about the technology, nor to make readers feel disappointed or discouraged about pursuing this field. I am (and will remain) a firm follower of AI/machine learning, for many reasons.

Regarding the limitations of AI/machine learning, the following points are often mentioned (as of January 2020). I cannot say that all of these points will eventually disappear from the list, but at least some of them will no longer be regarded as limitations in the future, as we have seen throughout the history of science and engineering.

Current AI/machine learning can indeed learn on its own, but it requires a large amount of training data. Generally speaking, the human brain can learn from a much smaller set of training data (or examples). There are many cases in which it is difficult (or nearly impossible) to obtain a data set large enough to train a machine learning algorithm. (A small sketch after this list illustrates how accuracy typically grows with the size of the training set.)
After an AI/machine learning algorithm has been trained on a specific training data set, it cannot always handle data it never encountered during training, even when the new data essentially belongs to the same category. For example, an image-based machine learning algorithm sometimes fails to recognize images that are similar to the training images but come from a different camera (e.g., with different brightness) or at a different resolution. (A sketch of this kind of shift appears after this list.)
I know that it knows it, but I don't know how it knows it. Here, "it" refers to AI/machine learning. This is usually called "explainability". From various examples and real applications, we know (we see) that certain AI/machine learning systems can learn something and know something (e.g., how to classify images), but we do not really understand how they learned it. You might say, "We do know how it learns: it learns by continuously updating its weight values through a mechanism called backpropagation." But when we talk about "explainability" here, it means something more specific... something like deterministic/explicit logic. Does this really matter? There are many things about the human brain whose workings we do not understand. Why do we expect this kind of explainability from AI/machine learning? I leave the answer to that question to you. (A sketch after this list shows what a backpropagation weight update actually looks like.)
It is hard to generalize what has been learned. In most (probably all) cases, an AI/machine learning algorithm is specialized to perform a specific function, and it is hard to generalize/extend what it has learned to other domains. For example, even the best image classifier cannot perform the basic task of driving a car. An algorithm that beats all of the best human players at one particular type of game will not play another type of game any better than a first-time player.
There are systematic ways to fool the algorithms. This was first discovered with algorithms used for image classification (i.e., CNN-based algorithms). It was found that a specially designed, very small perturbation of an image can make the algorithm produce a completely wrong result, while the same perturbation has no effect at all on the human brain. You may have heard of a famous example: a photo of a temple with this kind of perturbation was classified as an ostrich by an image classification algorithm. You can find more details and more examples of this kind here. It now seems that this problem is not limited to image classification algorithms; similar issues have been reported in areas such as natural language processing, as mentioned here. (A toy sketch of such an adversarial perturbation appears after this list.)
For AI/machine learning related to natural language processing, the ability to understand context is very limited (or absent altogether). By their nature, most languages (all natural languages, as far as I know) are ambiguous in many of their expressions. In most cases we humans resolve this ambiguity, at least to some degree, based on context, but AI/machine learning has no (or only a very limited) ability to understand context. A simple test is to try translating "Can I cut in?" into another language with Google Translate. You will see that the correct translation depends on the context... for example, whether you are holding a knife and trying to cut something, or standing near the front of a long queue and trying to squeeze into it.
There are also more abstract issues, such as machine bias and decision making in critical situations, for example when AI/machine learning is applied to autonomous driving (here). Personally, I am not sure whether these are really limitations of AI/machine learning. They may be limitations of the human brain and/or human society; we do not know of an exact solution within the human brain/society either, and we may never find a clear-cut solution there in the future. Then why should we care? I leave that to you. Personally, I think it is worth thinking, rethinking, and trying to find better steps and better adjustments, even if we cannot solve all of these problems at once (or with a single AI/machine learning algorithm).
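On the point about the need for large amounts of training data: the following is a minimal, purely illustrative sketch (synthetic data and a tiny classifier of my own invention, using PyTorch only as one convenient framework). The same model is trained on progressively larger training sets and evaluated on a held-out set; the exact numbers depend on the random seed, and only the trend matters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d = 20  # number of input features in this toy problem

def make_data(n_per_class):
    # Two noisy classes whose means differ by 0.5 per feature,
    # while the per-feature noise has standard deviation 1.0.
    x = torch.cat([torch.randn(n_per_class, d) - 0.25,
                   torch.randn(n_per_class, d) + 0.25])
    y = torch.cat([torch.zeros(n_per_class, dtype=torch.long),
                   torch.ones(n_per_class, dtype=torch.long)])
    return x, y

x_test, y_test = make_data(2000)  # large held-out set for evaluation

for n_per_class in (5, 50, 500):
    x_train, y_train = make_data(n_per_class)
    model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):  # quick training loop
        opt.zero_grad()
        F.cross_entropy(model(x_train), y_train).backward()
        opt.step()
    acc = (model(x_test).argmax(dim=1) == y_test).float().mean().item()
    print(f"{2 * n_per_class:4d} training examples -> test accuracy {acc:.2f}")
```

With only a handful of examples the learned decision boundary is dominated by noise; it takes far more data before the accuracy stabilizes, which is the practical meaning of "data hungry".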
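On the point about data the model never saw during training: a deliberately simplified sketch (synthetic 8x8 "images" and a tiny PyTorch classifier, all hypothetical). The training classes happen to be separable by overall brightness, so a global brightness offset at test time, the kind of difference a different camera or exposure setting might introduce, breaks the model. The mechanism is the same as in the real case: the model relies on whatever features separated its training data, and those features do not survive the change in acquisition conditions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Two synthetic classes of 8x8 "images" (64 pixels), separable by overall brightness:
# class 0 has dim pixels, class 1 has brighter pixels.
n = 500
x0 = torch.rand(n, 64) * 0.4          # class 0: pixel values in [0.0, 0.4]
x1 = torch.rand(n, 64) * 0.4 + 0.3    # class 1: pixel values in [0.3, 0.7]
x = torch.cat([x0, x1])
y = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                   # quick training loop
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()

def accuracy(inputs, labels):
    return (model(inputs).argmax(dim=1) == labels).float().mean().item()

print("accuracy on original data:  ", round(accuracy(x, y), 2))
# "Different camera": same labels, but every pixel is 0.3 brighter than in training.
print("accuracy on brightened data:", round(accuracy(x + 0.3, y), 2))
```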
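On the "explainability" point: the sketch below (PyTorch, toy sizes chosen only for illustration) performs exactly one backpropagation step on a single weight matrix. The point is that what a network has "learned" after training is nothing more than such updated numbers; there is no explicit, human-readable rule anywhere, which is what "explainability" in the deterministic/explicit-logic sense is asking for.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
w = torch.randn(4, 2, requires_grad=True)   # the model's entire "knowledge": a 4x2 weight matrix
x = torch.randn(8, 4)                        # a small batch of inputs
y = torch.randint(0, 2, (8,))                # their class labels

logits = x @ w                               # forward pass
loss = F.cross_entropy(logits, y)            # how wrong the current weights are
loss.backward()                              # backpropagation: compute d(loss)/d(w)

with torch.no_grad():
    w -= 0.1 * w.grad                        # one gradient-descent update of the weights

print(w)   # just a table of floats; nothing in it reads like an explicit rule
```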
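On the point about systematically fooling an algorithm: below is a toy sketch of the Fast Gradient Sign Method (FGSM), one well-known recipe for constructing such perturbations. Everything here is hypothetical (synthetic data, a tiny PyTorch model); the "temple to ostrich" example mentioned above involved a large CNN and a real photograph, where the perturbation is far smaller and invisible to a human.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d = 50                                           # feature dimension of this toy problem

# Two noisy synthetic classes whose means differ by 0.6 per feature,
# while the per-feature noise has standard deviation 1.0.
x = torch.cat([torch.randn(400, d) - 0.3, torch.randn(400, d) + 0.3])
y = torch.cat([torch.zeros(400, dtype=torch.long), torch.ones(400, dtype=torch.long)])

model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):                             # quick training loop
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()

# FGSM: nudge every feature by +/- epsilon in whatever direction increases the loss.
epsilon = 0.6                                    # smaller than the noise level (1.0)
x_in = x.clone().requires_grad_(True)
F.cross_entropy(model(x_in), y).backward()
x_adv = (x_in + epsilon * x_in.grad.sign()).detach()

clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
adv_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
print(f"accuracy on clean inputs:     {clean_acc:.2f}")   # typically close to 1.0
print(f"accuracy on perturbed inputs: {adv_acc:.2f}")     # typically collapses
```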

Why should we care about these limitations? Different people have different answers to this question. Having read many articles and watched lectures and seminars on YouTube, the following are some of the reasons as I understand them (this is only my own interpretation of those documents and videos, and I cannot guarantee that my interpretation is correct).
A warning against over-hype: it is sometimes suggested that we should clearly understand the limitations of current AI/machine learning so that we do not get caught up in over-hype. As we have seen throughout the history of science and engineering, a certain degree of hype is necessary to motivate every stakeholder in the industry to realize a technology, but excessive hype builds unrealistic expectations that eventually turn into disappointment.
