HDFS Client Source Code

This article walks through how DFSClient is constructed, how it talks to the NameNode over RPC to obtain block location information for a file, and how the input stream reads data. It also outlines the role DFSClient plays in directory-tree operations and the methods involved.



The -text command eventually reaches the getInputStream method of the Display class:

protected void processPath(PathData item) throws IOException {
   if (item.stat.isDirectory()) {
     throw new PathIsDirectoryException(item.toString());
   }
   
   item.fs.setVerifyChecksum(verifyChecksum);
   printToStdout(getInputStream(item)); // print the stream's contents to stdout
 }
 ...
 
// Display.getInputStream: inspects the leading bytes and wraps the input stream accordingly
protected InputStream getInputStream(PathData item) throws IOException {
      FSDataInputStream i = (FSDataInputStream)super.getInputStream(item);

      // Handle 0 and 1-byte files
      short leadBytes;
      try {
        leadBytes = i.readShort();
      } catch (EOFException e) {
        i.seek(0);
        return i;
      }

      // Check type of stream first
      switch(leadBytes) {
        case 0x1f8b: { // RFC 1952
          // Must be gzip
          i.seek(0);
          return new GZIPInputStream(i);
        }
        case 0x5345: { // 'S' 'E'
          // Might be a SequenceFile
          if (i.readByte() == 'Q') {
            i.close();
            return new TextRecordInputStream(item.stat);
          }
          break;
        }
        case 0x4f62: { // 'O' 'b'
          if (i.readByte() == 'j') {
            i.close();
            return new AvroFileInputStream(item.stat);
          }
          break;
        }
        default: {
          // Check the type of compression instead, depending on Codec class's
          // own detection methods, based on the provided path.
          CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
          CompressionCodec codec = cf.getCodec(item.path);
          if (codec != null) {
            i.seek(0);
            return codec.createInputStream(i);
          }
          break;
        }
      }

      // File is non-compressed, or not a file container we know.
      i.seek(0);
      return i;
    }

Inside Display.getInputStream, the call to super.getInputStream simply opens the path on the underlying FileSystem:

    protected InputStream getInputStream(PathData item) throws IOException {
      return item.fs.open(item.path);
    }

For an HDFS path, the fs above is a DistributedFileSystem, whose dfs field is a DFSClient.
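To put the walkthrough in context, this is roughly what a user-level read looks like; the chain traced below (DistributedFileSystem.open, DFSClient.open, DFSInputStream) runs underneath it. A minimal sketch, where the hdfs://nn:8020 URI and /tmp/demo.txt path are placeholders:

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CatFromHdfs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // For an hdfs:// URI, FileSystem.get returns a DistributedFileSystem
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://nn:8020"), conf);
         // open() goes through DistributedFileSystem.open -> DFSClient.open,
         // which constructs the DFSInputStream analysed below
         InputStream in = fs.open(new Path("/tmp/demo.txt"))) {
      IOUtils.copyBytes(in, System.out, 4096, false);
    }
  }
}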

(Important) Enter DistributedFileSystem.open:

public FSDataInputStream open(Path f, final int bufferSize)
      throws IOException {
    statistics.incrementReadOps(1);
    Path absF = fixRelativePart(f);
    return new FileSystemLinkResolver<FSDataInputStream>() {
      @Override
      public FSDataInputStream doCall(final Path p)
          throws IOException, UnresolvedLinkException {
        final DFSInputStream dfsis =
          dfs.open(getPathName(p), bufferSize, verifyChecksum);
        return dfs.createWrappedInputStream(dfsis);
      }
      @Override
      public FSDataInputStream next(final FileSystem fs, final Path p)
          throws IOException {
        return fs.open(p, bufferSize);
      }
    }.resolve(this, absF);
  }

The anonymous FileSystemLinkResolver supplies two callbacks, doCall and next: resolve first invokes doCall on the resolved path, and if the path turns out to be a symbolic link pointing into another file system, it retries via next on that file system.


doCall obtains the stream by calling dfs.open(); dfs is the DFSClient instance held by the DistributedFileSystem, and it in turn holds a ClientProtocol proxy for communicating with the NameNode.
DFSClient.open:

  public DFSInputStream open(String src, int buffersize, boolean verifyChecksum)
      throws IOException, UnresolvedLinkException {
    checkOpen();
    //    Get block info from namenode
    TraceScope scope = getPathTraceScope("newDFSInputStream", src);
    try {
      return new DFSInputStream(this, src, verifyChecksum);
    } finally {
      scope.close();
    }
  }

A DFSInputStream is constructed as the input stream for the file.

Let's look at what the constructor does:

  DFSInputStream(DFSClient dfsClient, String src, boolean verifyChecksum
                 ) throws IOException, UnresolvedLinkException {
    this.dfsClient = dfsClient;
    this.verifyChecksum = verifyChecksum;
    this.src = src;
    synchronized (infoLock) {
      this.cachingStrategy = dfsClient.getDefaultReadCachingStrategy();
    }
    openInfo();
  }

The constructor then calls openInfo():

  void openInfo() throws IOException, UnresolvedLinkException {
    synchronized(infoLock) {
    
      // Fetch the location information for all of the file's blocks
      lastBlockBeingWrittenLength = fetchLocatedBlocksAndGetLastBlockLength();

      // Number of retries for fetching the length of the last block
      int retriesForLastBlockLength = dfsClient.getConf().retryTimesForGetLastBlockLength;

      // Retry while the length of the last block cannot be determined
      while (retriesForLastBlockLength > 0) {
        // Getting last block length as -1 is a special case. When cluster
        // restarts, DNs may not report immediately. At this time partial block
        // locations will not be available with NN for getting the length. Lets
        // retry for 3 times to get the length.
        if (lastBlockBeingWrittenLength == -1) {
          DFSClient.LOG.warn("Last block locations not available. "
              + "Datanodes might not have reported blocks completely."
              + " Will retry for " + retriesForLastBlockLength + " times");
          waitFor(dfsClient.getConf().retryIntervalForGetLastBlockLength);
          lastBlockBeingWrittenLength = fetchLocatedBlocksAndGetLastBlockLength();
        } else {
          break;
        }
        retriesForLastBlockLength--;
      }
      if (retriesForLastBlockLength == 0) {
        throw new IOException("Could not obtain the last block locations.");
      }
    }
  }

As shown above, openInfo calls fetchLocatedBlocksAndGetLastBlockLength to obtain the location information for all of the file's blocks. Its logic breaks down into a few steps: fetch the block locations from the NameNode over ClientProtocol, compare them against the locations already cached in locatedBlocks, update the locatedBlocks field, and, if the last block is still under construction, ask a Datanode for its current length.

  private long fetchLocatedBlocksAndGetLastBlockLength() throws IOException {

    // Fetch the location information for all of the file's blocks through ClientProtocol
    final LocatedBlocks newInfo = dfsClient.getLocatedBlocks(src, 0);
    if (DFSClient.LOG.isDebugEnabled()) {
      DFSClient.LOG.debug("newInfo = " + newInfo);
    }
    if (newInfo == null) {
      throw new IOException("Cannot open filename " + src);
    }

    // Compare the cached DFSInputStream.locatedBlocks with the newly fetched block locations
    if (locatedBlocks != null) {
      Iterator<LocatedBlock> oldIter = locatedBlocks.getLocatedBlocks().iterator();
      Iterator<LocatedBlock> newIter = newInfo.getLocatedBlocks().iterator();
      while (oldIter.hasNext() && newIter.hasNext()) {
        
        // If the block lists do not match, throw an exception
        if (! oldIter.next().getBlock().equals(newIter.next().getBlock())) {
          throw new IOException("Blocklist for " + src + " has changed!");
        }
      }
    }
    // Update the locatedBlocks field
    locatedBlocks = newInfo;
    long lastBlockBeingWrittenLength = 0;
    if (!locatedBlocks.isLastBlockComplete()) {
      final LocatedBlock last = locatedBlocks.getLastLocatedBlock();
      if (last != null) {
        if (last.getLocations().length == 0) {
          if (last.getBlockSize() == 0) {
            // if the length is zero, then no data has been written to
            // datanode. So no need to wait for the locations.
            // The last block has zero length, so nothing needs updating; return 0 directly
            return 0;
          }
          return -1;
        }
        // Ask a Datanode for the block's length through ClientDatanodeProtocol
        final long len = readBlockLength(last);
        // Update the length of the last block cached in locatedBlocks
        last.getBlock().setNumBytes(len);
        lastBlockBeingWrittenLength = len; 
      }
    }
    return lastBlockBeingWrittenLength;
  }

The method above relies on two calls:

  • dfsClient.getLocatedBlocks: talks to the NameNode over ClientProtocol
  • readBlockLength: talks to a Datanode over ClientDatanodeProtocol

(1) dfsClient.getLocatedBlocks

  public LocatedBlocks getLocatedBlocks(String src, long start)
      throws IOException {
    return getLocatedBlocks(src, start, dfsClientConf.prefetchSize);
  }


  public LocatedBlocks getLocatedBlocks(String src, long start, long length)
      throws IOException {
    TraceScope scope = getPathTraceScope("getBlockLocations", src);
    try {
      return callGetBlockLocations(namenode, src, start, length);
    } finally {
      scope.close();
    }
  }
  static LocatedBlocks callGetBlockLocations(ClientProtocol namenode,
      String src, long start, long length) 
      throws IOException {
    try {
      return namenode.getBlockLocations(src, start, length);
    } catch(RemoteException re) {
      throw re.unwrapRemoteException(AccessControlException.class,
                                     FileNotFoundException.class,
                                     UnresolvedPathException.class);
    }
  }

callGetBlockLocations then invokes getBlockLocations on the ClientProtocol proxy, which sends the RPC to the NameNode and returns the LocatedBlocks for the requested range of the file.
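The same block-location information is visible from the public API. A minimal sketch, assuming the fs instance and file path from the earlier example, that prints where each block of a file lives:

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
  // fs is an already-opened DistributedFileSystem, as in the earlier sketch
  static void printLocations(FileSystem fs, Path file) throws Exception {
    FileStatus stat = fs.getFileStatus(file);
    // Under the hood this ends up in ClientProtocol.getBlockLocations on the NameNode
    BlockLocation[] blocks = fs.getFileBlockLocations(stat, 0, stat.getLen());
    for (BlockLocation b : blocks) {
      System.out.println("offset=" + b.getOffset()
          + " length=" + b.getLength()
          + " hosts=" + String.join(",", b.getHosts()));
    }
  }
}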
(2) readBlockLength
readBlockLength handles the case where the last block is still being written: it iterates over the Datanodes that hold a replica of that block and asks each of them, through a ClientDatanodeProtocol proxy, for the replica's currently visible length, returning the first answer it obtains.

DFSClient

Directory-tree operations (mkdirs, delete, and so on) simply invoke the corresponding NameNode methods over RPC. Compared with the input and output streams, they are fairly simple.

Constructor

The constructor has two main tasks:

  • read the configuration and initialize the member fields
  • establish the IPC connection to the NameNode

/** 
   * Create a new DFSClient connected to the given nameNodeUri or rpcNamenode.
   * If HA is enabled and a positive value is set for 
   * {@link DFSConfigKeys#DFS_CLIENT_TEST_DROP_NAMENODE_RESPONSE_NUM_KEY} in the
   * configuration, the DFSClient will use {@link LossyRetryInvocationHandler}
   * as its RetryInvocationHandler. Otherwise one of nameNodeUri or rpcNamenode 
   * must be null.
   */
  @VisibleForTesting
  public DFSClient(URI nameNodeUri, ClientProtocol rpcNamenode,
      Configuration conf, FileSystem.Statistics stats)
    throws IOException {
    SpanReceiverHost.get(conf, DFSConfigKeys.DFS_CLIENT_HTRACE_PREFIX);
    traceSampler = new SamplerBuilder(TraceUtils.
        wrapHadoopConf(DFSConfigKeys.DFS_CLIENT_HTRACE_PREFIX, conf)).build();
    // Copy only the required DFSClient configuration
    this.dfsClientConf = new Conf(conf);
    if (this.dfsClientConf.useLegacyBlockReaderLocal) {
      LOG.debug("Using legacy short-circuit local reads.");
    }
    this.conf = conf;
    this.stats = stats;
    this.socketFactory = NetUtils.getSocketFactory(conf, ClientProtocol.class);
    this.dtpReplaceDatanodeOnFailure = ReplaceDatanodeOnFailure.get(conf);

    this.ugi = UserGroupInformation.getCurrentUser();
    
    this.authority = nameNodeUri == null? "null": nameNodeUri.getAuthority();
    this.clientName = "DFSClient_" + dfsClientConf.taskId + "_" + 
        DFSUtil.getRandom().nextInt()  + "_" + Thread.currentThread().getId();
    int numResponseToDrop = conf.getInt(
        DFSConfigKeys.DFS_CLIENT_TEST_DROP_NAMENODE_RESPONSE_NUM_KEY,
        DFSConfigKeys.DFS_CLIENT_TEST_DROP_NAMENODE_RESPONSE_NUM_DEFAULT);
    NameNodeProxies.ProxyAndInfo<ClientProtocol> proxyInfo = null;
    AtomicBoolean nnFallbackToSimpleAuth = new AtomicBoolean(false);
    if (numResponseToDrop > 0) {
      // This case is used for testing.
      LOG.warn(DFSConfigKeys.DFS_CLIENT_TEST_DROP_NAMENODE_RESPONSE_NUM_KEY
          + " is set to " + numResponseToDrop
          + ", this hacked client will proactively drop responses");
      proxyInfo = NameNodeProxies.createProxyWithLossyRetryHandler(conf,
          nameNodeUri, ClientProtocol.class, numResponseToDrop,
          nnFallbackToSimpleAuth);
    }
    
    if (proxyInfo != null) {
      this.dtService = proxyInfo.getDelegationTokenService();
      this.namenode = proxyInfo.getProxy();
    } else if (rpcNamenode != null) {
      // This case is used for testing.
      Preconditions.checkArgument(nameNodeUri == null);
      this.namenode = rpcNamenode;
      dtService = null;
    } else {
      Preconditions.checkArgument(nameNodeUri != null,
          "null URI");
      proxyInfo = NameNodeProxies.createProxy(conf, nameNodeUri,
          ClientProtocol.class, nnFallbackToSimpleAuth);
      this.dtService = proxyInfo.getDelegationTokenService();
      this.namenode = proxyInfo.getProxy();
    }

    String localInterfaces[] =
      conf.getTrimmedStrings(DFSConfigKeys.DFS_CLIENT_LOCAL_INTERFACES);
    localInterfaceAddrs = getLocalInterfaceAddrs(localInterfaces);
    if (LOG.isDebugEnabled() && 0 != localInterfaces.length) {
      LOG.debug("Using local interfaces [" +
      Joiner.on(',').join(localInterfaces)+ "] with addresses [" +
      Joiner.on(',').join(localInterfaceAddrs) + "]");
    }
    
    Boolean readDropBehind = (conf.get(DFS_CLIENT_CACHE_DROP_BEHIND_READS) == null) ?
        null : conf.getBoolean(DFS_CLIENT_CACHE_DROP_BEHIND_READS, false);
    Long readahead = (conf.get(DFS_CLIENT_CACHE_READAHEAD) == null) ?
        null : conf.getLong(DFS_CLIENT_CACHE_READAHEAD, 0);
    Boolean writeDropBehind = (conf.get(DFS_CLIENT_CACHE_DROP_BEHIND_WRITES) == null) ?
        null : conf.getBoolean(DFS_CLIENT_CACHE_DROP_BEHIND_WRITES, false);
    this.defaultReadCachingStrategy =
        new CachingStrategy(readDropBehind, readahead);
    this.defaultWriteCachingStrategy =
        new CachingStrategy(writeDropBehind, readahead);
    this.clientContext = ClientContext.get(
        conf.get(DFS_CLIENT_CONTEXT, DFS_CLIENT_CONTEXT_DEFAULT),
        dfsClientConf);
    this.hedgedReadThresholdMillis = conf.getLong(
        DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THRESHOLD_MILLIS,
        DFSConfigKeys.DEFAULT_DFSCLIENT_HEDGED_READ_THRESHOLD_MILLIS);
    int numThreads = conf.getInt(
        DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THREADPOOL_SIZE,
        DFSConfigKeys.DEFAULT_DFSCLIENT_HEDGED_READ_THREADPOOL_SIZE);
    if (numThreads > 0) {
      this.initThreadsNumForHedgedReads(numThreads);
    }
    this.saslClient = new SaslDataTransferClient(
      conf, DataTransferSaslUtil.getSaslPropertiesResolver(conf),
      TrustedChannelResolver.getInstance(conf), nnFallbackToSimpleAuth);
  }

The constructor loads the configuration, sets up the network-related parameters, and then establishes the IPC connection to the NameNode:

 if (proxyInfo != null) {
      this.dtService = proxyInfo.getDelegationTokenService();
      this.namenode = proxyInfo.getProxy();
    } else if (rpcNamenode != null) {
      // This case is used for testing.
      Preconditions.checkArgument(nameNodeUri == null);
      this.namenode = rpcNamenode;
      dtService = null;
    } else {
      Preconditions.checkArgument(nameNodeUri != null,
          "null URI");
      proxyInfo = NameNodeProxies.createProxy(conf, nameNodeUri,
          ClientProtocol.class, nnFallbackToSimpleAuth);
      this.dtService = proxyInfo.getDelegationTokenService();
      this.namenode = proxyInfo.getProxy();
    }

Once the DFSClient constructor succeeds, the object can be used for all kinds of operations on HDFS.
DFSClient.close shuts the client down.
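All of the values the constructor reads come from the Configuration passed in by the caller. As an illustration (the key names below follow the Hadoop 2.x DFSConfigKeys; treat them as an assumption and verify against your version), this is how a client could enable the hedged-read thread pool that the constructor initializes:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HedgedReadClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // A non-zero pool size makes the DFSClient constructor call initThreadsNumForHedgedReads
    conf.setInt("dfs.client.hedged.read.threadpool.size", 10);
    // Reads slower than this threshold (ms) trigger a hedged read on another Datanode
    conf.setLong("dfs.client.hedged.read.threshold.millis", 500);
    // FileSystem.get -> DistributedFileSystem.initialize -> new DFSClient(...)
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://nn:8020"), conf)) {
      // ... use fs; fs.close() eventually calls DFSClient.close()
    }
  }
}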

Files and Directories

These operations do not need to talk to the Datanodes.
DFSClient.java contains the typical directory-tree methods such as mkdirs(), delete(), getFileInfo(), setOwner() and so on, for example:

  void checkOpen() throws IOException {
    if (!clientRunning) {
      IOException result = new IOException("Filesystem closed");
      throw result;
    }
  }

  // delete in DFSClient: a direct RPC call to the NameNode's delete
  public boolean delete(String src, boolean recursive) throws IOException {
    checkOpen();
    TraceScope scope = getPathTraceScope("delete", src);
    try {
      return namenode.delete(src, recursive);
    } catch(RemoteException re) {
      throw re.unwrapRemoteException(AccessControlException.class,
                                     FileNotFoundException.class,
                                     SafeModeException.class,
                                     UnresolvedPathException.class,
                                     SnapshotAccessControlException.class);
    } finally {
      scope.close();
    }
  }
  ...
  
  // getFileInfo in DFSClient likewise calls the NameNode's getFileInfo directly
  public HdfsFileStatus getFileInfo(String src) throws IOException {
    checkOpen();
    TraceScope scope = getPathTraceScope("getFileInfo", src);
    try {
      return namenode.getFileInfo(src);
    } catch(RemoteException re) {
      throw re.unwrapRemoteException(AccessControlException.class,
                                     FileNotFoundException.class,
                                     UnresolvedPathException.class);
    } finally {
      scope.close();
    }
  }

mkdirs computes the masked permission and then likewise goes through RPC:

  // mkdirs in DFSClient
  public boolean mkdirs(String src, FsPermission permission,
      boolean createParent) throws IOException {
    if (permission == null) {
      permission = FsPermission.getDefault();
    }
    FsPermission masked = permission.applyUMask(dfsClientConf.uMask);
    return primitiveMkdir(src, masked, createParent);
  }
 ...

  public boolean primitiveMkdir(String src, FsPermission absPermission, 
    boolean createParent)
    throws IOException {
    checkOpen();
    if (absPermission == null) {
      absPermission = 
        FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
    } 

    if(LOG.isDebugEnabled()) {
      LOG.debug(src + ": masked=" + absPermission);
    }
    TraceScope scope = Trace.startSpan("mkdir", traceSampler);
    try {
      return namenode.mkdirs(src, absPermission, createParent);
    } catch(RemoteException re) {
      throw re.unwrapRemoteException(AccessControlException.class,
                                     InvalidPathException.class,
                                     FileAlreadyExistsException.class,
                                     FileNotFoundException.class,
                                     ParentNotDirectoryException.class,
                                     SafeModeException.class,
                                     NSQuotaExceededException.class,
                                     DSQuotaExceededException.class,
                                     UnresolvedPathException.class,
                                     SnapshotAccessControlException.class);
    } finally {
      scope.close();
    }
  }

Summary:

  1. Every directory-tree/file operation in DFSClient first calls checkOpen() to verify that the client is still running, and then invokes the corresponding NameNode method over RPC.
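From user code these directory-tree operations are reached through the FileSystem facade, which forwards to the DFSClient methods shown above. A minimal sketch, assuming the fs instance from the earlier examples and a placeholder path:

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class DirTreeOps {
  static void demo(FileSystem fs) throws Exception {
    Path dir = new Path("/tmp/demo-dir");
    // DistributedFileSystem.mkdirs -> DFSClient.mkdirs -> namenode.mkdirs
    fs.mkdirs(dir, new FsPermission((short) 0755));
    // DistributedFileSystem.getFileStatus -> DFSClient.getFileInfo -> namenode.getFileInfo
    FileStatus stat = fs.getFileStatus(dir);
    System.out.println(stat.getPath() + " owner=" + stat.getOwner());
    // DistributedFileSystem.delete -> DFSClient.delete -> namenode.delete
    fs.delete(dir, true);
  }
}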

Reads and the Input Stream

The input and output streams are the most complex part of DFSClient: they must talk not only to the NameNode but also to the Datanodes. Of the two, the input stream is the simpler one.
During a read, the NameNode exposes two remote methods (a user-level read sketch follows the list):

  • getBlockLocations()
  • reportBadBlocks()
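The stream returned by open() also supports random access: it implements Seekable and PositionedReadable, so a client can jump to an arbitrary offset, and the DFSInputStream will locate the containing block via the fetched block locations and read from one of its Datanodes. A minimal sketch, again with placeholder names:

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RandomReadDemo {
  static void readAt(FileSystem fs, Path file, long offset) throws Exception {
    byte[] buf = new byte[1024];
    try (FSDataInputStream in = fs.open(file)) {
      // Positioned read: does not move the stream's current offset
      int n = in.read(offset, buf, 0, buf.length);
      System.out.println("positioned read returned " + n + " bytes at offset " + offset);

      // Seek + sequential read: moves to the block containing 'offset'
      in.seek(offset);
      int m = in.read(buf, 0, buf.length);
      System.out.println("sequential read returned " + m + " bytes");
    }
  }
}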