ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint

This article describes how to resolve the SecondaryNameNode exception in Hadoop: stop the Hadoop services, delete the SecondaryNameNode data files under the temporary directory (the path can be found via hadoop.tmp.dir), and then restart the Hadoop services.

http://stackoverflow.com/questions/21732988/error-org-apache-hadoop-hdfs-server-namenode-secondarynamenode-exception-in-doc

We need to stop the Hadoop services first, then delete the SecondaryNameNode's temporary data directory (hadoop.tmp.dir gives the path to the SecondaryNameNode data directory). After this, start the services again and the issue will be fixed.
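A minimal sketch of the procedure, assuming a typical setup where hadoop.tmp.dir is defined in core-site.xml, HADOOP_HOME points at the installation, and the SecondaryNameNode keeps its checkpoint data in the default location ${hadoop.tmp.dir}/dfs/namesecondary; verify the paths on your cluster before deleting anything:

# Stop the HDFS (and, if running, YARN) services first.
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/stop-yarn.sh

# Look up the effective value of hadoop.tmp.dir (normally set in core-site.xml).
TMP_DIR=$(hdfs getconf -confKey hadoop.tmp.dir)
echo "hadoop.tmp.dir = $TMP_DIR"

# By default the SecondaryNameNode checkpoint data lives under
# ${hadoop.tmp.dir}/dfs/namesecondary. Remove only that directory,
# NOT dfs/name, which holds the primary NameNode's fsimage and edits.
rm -rf "$TMP_DIR/dfs/namesecondary"

# Restart the services; the SecondaryNameNode recreates its checkpoint
# directory on the next doCheckpoint cycle.
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh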

