Speed up your Eclipse

Speed up your Eclipse as a super-fast IDE

Follow these steps to turn Eclipse into a super-fast IDE. They work for both the 32-bit and 64-bit versions of Eclipse/JDK on my Windows 7 x64.

1. Disable your antivirus software, or add your JDK, Eclipse, workspace, and other related folders to its whitelist.

2. Disable unnecessary validations and startup actions in the Eclipse preferences.

For validations: in the Eclipse menu, go to Window -> Preferences -> Validation, click “Disable All”, and then re-enable only the validators you need. I kept only the “Classpath Dependency Validator”.

For startup actions: go to Window -> Preferences, type “startup”, select “Startup and Shutdown”, and untick the options you don’t need. (Thanks to David for the suggestion; I’ve added these two detailed steps for readers.)

3. Modify eclipse.ini to set Xmn (new generation size), Xms, and Xmx, and enable the parallel GC. For example:
-vm
C:/jdk1.6.0_25/bin
-startup
plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.100.v20110502
-product
org.eclipse.epp.package.jee.product
--launcher.defaultAction
openFile
--launcher.XXMaxPermSize
256M
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
256m
--launcher.defaultAction
openFile
-vmargs
-server
-Dosgi.requiredJavaVersion=1.5
-Xmn128m
-Xms1024m
-Xmx1024m
-Xss2m
-XX:PermSize=128m
-XX:MaxPermSize=128m
-XX:+UseParallelGC
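To confirm that the new options took effect, check Help -> About Eclipse -> Installation Details -> Configuration after startup; the effective VM arguments are listed there. Alternatively (assuming the JDK's bin directory is on your PATH), jps -v from a command prompt should list the running Eclipse JVM together with its -Xmn/-Xms/-Xmx flags, for example:

jps -v | findstr Xmx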

4. Create a RAM disk (virtual disk) in memory, and put your JDK on it.
You can use ImDisk to create the RAM disk. First download it from http://www.ltr-data.se/opencode.html/#ImDisk and install it.

Then set the ImDisk driver to auto-load: open a Command Prompt with administrative privileges (for example, right-click and choose “Run as administrator”) and type:

sc config imdisk start= auto
net start imdisk
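To verify that the driver loaded, you can query its service state (a standard Windows command, not part of ImDisk):

sc query imdisk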

Create a 300 MB RAM disk as drive N: with:
imdisk -a -t vm -s 300m -m n:

Then format the RAM disk manually (with the format command or in Windows Explorer) and copy your JDK onto it.
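A rough command-line sketch (assuming NTFS, a JDK installed at C:\jdk1.6.0_25, and the RAM disk mounted as N:; format will ask for confirmation):

format n: /fs:ntfs /q /v:JDKRAM
xcopy C:\jdk1.6.0_25 n:\jdk1.6.0_25 /e /i /q

Then point the -vm line in eclipse.ini at n:/jdk1.6.0_25/bin so Eclipse picks up the copy on the RAM disk.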

Later, if you want to remove the RAM disk, just run:
imdisk -D -m n:

5. Make sure you are using the Sun JDK, not OpenJDK/GCJ or another JDK, on Linux.
If you are still not satisfied after the above steps, install JRockit instead of the Sun JDK and adjust some of the VM options according to the JRockit documentation;
JRockit is faster than the Sun JDK.
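For example, assuming JRockit were installed at C:/jrockit (a hypothetical path), you would change the -vm entry in eclipse.ini to:

-vm
C:/jrockit/bin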

Some notes:

On my laptop (Aspire 4745G, 4 GB RAM) running Windows 7 x64, Eclipse takes more than 30 seconds to start by default. After tuning the VM options, it takes only 15 seconds.

And after moving the JDK to the RAM disk, it takes 10 seconds to start up.

JVM switches to improve Java IDE performance
Submitted by 小天蝎 on 2005, August 18, 2:45 PM.

My laptop is a P4 1.8 GHz Dell C640 with 1 GB of RAM; Eclipse 3.1 + MyEclipse 4.0m2 runs quite well with these startup parameters: eclipse.exe -vmargs -Xverify:none -XX:+UseParallelGC -XX:PermSize=20M

The JVM provides various standard and non-standard switches for tuning memory allocation and garbage-collection behavior, and some of these settings can improve the performance of a Java IDE. Note that because -X (and especially -XX) switches are usually JVM- or vendor-specific, the switches described here apply to Sun Microsystems J2SE 1.4.2.

The following settings give better-than-factory performance on most systems.

-vmargs - passes all following arguments directly to the indicated Java VM.

-Xverify:none - turns off Java bytecode verification, which speeds up class loading and avoids loading classes purely for verification during startup. It shortens startup time, so there is no reason not to use it.

-Xms24m - tells the Java virtual machine to set its initial heap size to 24 MB. By telling the JVM how much memory it should initially allocate to the heap, we save it from having to grow the heap as the IDE consumes more memory.

-Xmx96m - specifies the maximum amount of memory the Java virtual machine should use for the heap. Setting an upper limit keeps the Java process from consuming more than the available physical memory. The limit can be raised on systems with more memory; the 96 MB setting helps ensure that the IDE runs reliably on machines with 128 MB to 256 MB of RAM. Note: do not set this value close to or above the machine's physical memory, or major collections will cause heavy swapping.

-XX:PermSize=20m - this JVM switch is not only more powerful but also shortens startup time. It sizes the memory's "permanent area", where classes are kept, so we are hinting to the JVM how much memory it will need. On many systems it eliminates major garbage-collection events during startup. Users of Sun ONE Studio or other IDEs with more modules may want to set this value higher.

The following JVM switches may have a slight or noticeable effect on Eclipse's performance on some (but not all) systems; they are worth a try:

-XX:CompileThreshold=100 - this switch slows down startup because HotSpot compiles more methods to native code sooner than it otherwise would. The result is better IDE performance at runtime, since more of the UI code is compiled rather than interpreted. The value is the number of times a method must be called before it is compiled.

-XX:+UseConcMarkSweepGC -XX:+UseParNewGC - try these switches if garbage collection pauses are frequent and disruptive. They make the JVM use different algorithms for major collection events (and, on multiprocessor workstations, for minor collections as well) that do not pause the entire garbage-collection process. Note: it is currently unclear whether this collector improves or degrades performance on single-processor machines.

-XX:+UseParallelGC - some tests show that, at least on single-processor systems with a fairly generous memory configuration, this collection algorithm can halve the duration of minor garbage collections. Note that this is a contradictory point, since this collector is mainly intended for multiprocessor machines with multi-gigabyte heaps, and no data is available on its effect on major collections. Note: this collector is mutually exclusive with -XX:+UseConcMarkSweepGC.

Suggested startup parameters: c:\eclipse\eclipse.exe -vmargs -Xverify:none -Xms128M -Xmx512M -XX:PermSize=64M -XX:MaxPermSize=128M