trust level="Full" (reposted)

This article explains how to grant a custom WebPart's DLL specific permissions in SharePoint, avoiding the security risk of running under full trust, by modifying the configuration files for precise permission control.


In SharePoint there are not only user permissions but also code permissions. For example, when you install a custom WebPart, by default it is not allowed to manipulate folders. If you read articles that teach you how to build WebParts, you will find that many of them tell you to modify the web.config file and change its trust level to Full:

<trust level="Full" originUrl="" />

Frankly, that is irresponsible advice: it grants every DLL every permission. If you are just running small experiments on your own machine, that is fine; in a real project it creates a serious security hole.

What we should do instead is grant our DLL custom permissions, which eliminates the risk.

First, create a config file containing the custom permissions. Go to C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\CONFIG and find the file wss_minimaltrust.config. This is the permission policy file defined by Microsoft, and it is the one used by default. Make a copy of it and rename the copy wss_custom_minimaltrust.config.
Find the following node:
<CodeGroup
    class="UnionCodeGroup"
    version="1"
    PermissionSetName="SPRestricted">
    <IMembershipCondition
        class="UrlMembershipCondition"
        version="1"
        Url="$AppDirUrl$/*"
     />
</CodeGroup>

Copy it and paste the copy directly before it, then change PermissionSetName to FullTrust. If your DLL sits in the bin directory and is named ss.dll, change its Url to "$AppDirUrl$/bin/ss.dll" and save the file.
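For reference, the pasted copy should then read as follows (using the bin/ss.dll example above):

<CodeGroup
    class="UnionCodeGroup"
    version="1"
    PermissionSetName="FullTrust">
    <IMembershipCondition
        class="UrlMembershipCondition"
        version="1"
        Url="$AppDirUrl$/bin/ss.dll"
     />
</CodeGroup>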

Now go back to SharePoint's web.config file, find the trustLevel nodes, and add a new one:
<trustLevel name="WSS_Custom_Minimal" policyFile="C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\config\wss_custom_minimaltrust.config" />

This registers the new permission policy file with our site.

Then find the trust node and change it to <trust level="WSS_Custom_Minimal" originUrl="" />
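Taken together, the relevant parts of web.config now look roughly like this (a sketch; the exact set of stock trustLevel entries depends on your installation):

<system.web>
    <securityPolicy>
        <trustLevel name="WSS_Medium" policyFile="C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\config\wss_mediumtrust.config" />
        <trustLevel name="WSS_Minimal" policyFile="C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\config\wss_minimaltrust.config" />
        <trustLevel name="WSS_Custom_Minimal" policyFile="C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\config\wss_custom_minimaltrust.config" />
    </securityPolicy>
    <trust level="WSS_Custom_Minimal" originUrl="" />
</system.web>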

With that, our custom permission policy is in effect. Everything else stays the same; only ss.dll has had its permissions raised.
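Incidentally, if FullTrust for that one DLL is still broader than you need, the same policy file lets you define a custom PermissionSet and point the CodeGroup's PermissionSetName at it instead. A minimal sketch, where the set name CustomWebPartSet and the folder C:\Data are placeholders, and which assumes the referenced permission classes are declared in the file's SecurityClasses section (you may need to add an entry for FileIOPermission):

<PermissionSet
    class="NamedPermissionSet"
    version="1"
    Name="CustomWebPartSet">
    <!-- minimum rights for code hosted in ASP.NET -->
    <IPermission
        class="AspNetHostingPermission"
        version="1"
        Level="Minimal"
    />
    <!-- allow the code to execute, nothing more -->
    <IPermission
        class="SecurityPermission"
        version="1"
        Flags="Execution"
    />
    <!-- grant read/write to a single folder only -->
    <IPermission
        class="FileIOPermission"
        version="1"
        Read="C:\Data"
        Write="C:\Data"
    />
</PermissionSet>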

 