April 2024 Notes (executing multiple queries with a Statement, maven-shade-plugin and maven-assembly-plugin, CAS validation)

This post covers executing multiple queries over JDBC in Java — connecting to the database and working with the Statement object — and how MyBatis releases the ResultSet and the Statement. It also touches on the maven-shade-plugin and maven-assembly-plugin packaging plugins, PCIe basics, and ticket validation from a hands-on CAS introduction.

1. (Repost) Operating systems — the MBR and video memory

https://www.cnblogs.com/hawkJW/p/13651701.html

2. Executing multiple queries with a Statement

In Java, running SQL queries through a JDBC Statement object follows these steps:

  1. Load the database driver.

  2. Create the database connection.

  3. Create the Statement object.

  4. Execute the queries and process the results.

  5. Close the Statement object and the connection.

Example code that executes multiple queries:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
 
public class MultipleQueriesExample {
    public static void main(String[] args) {
        String url = "jdbc:mysql://localhost:3306/your_database";
        String user = "your_username";
        String password = "your_password";
 
        try {
            // Step 1: Load the driver
            Class.forName("com.mysql.cj.jdbc.Driver");
 
            // Step 2: Establish a connection
            try (Connection conn = DriverManager.getConnection(url, user, password);
                 // Step 3: Create a statement
                 Statement statement = conn.createStatement()) {
 
                // Step 4: Execute queries
                String query1 = "SELECT * FROM your_table WHERE condition1";
                String query2 = "SELECT * FROM your_table WHERE condition2";
 
                // Execute query 1
                ResultSet rs1 = statement.executeQuery(query1);
                // Process results of query 1
                while (rs1.next()) {
                    // Retrieve by column name
                    int id = rs1.getInt("id");
                    String name = rs1.getString("name");
                    // ...
                }
                rs1.close();
 
                // Execute query 2
                ResultSet rs2 = statement.executeQuery(query2);
                // Process results of query 2
                while (rs2.next()) {
                    // Retrieve by column name
                    int id = rs2.getInt("id");
                    String name = rs2.getString("name");
                    // ...
                }
                rs2.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
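One detail the example relies on: per the JDBC specification, a Statement keeps at most one ResultSet open at a time, so executing the second query would implicitly close rs1 even without the explicit rs1.close(). Below is a minimal sketch of the same two queries with each ResultSet scoped by try-with-resources; the URL, table, and column names are the same placeholders as above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TryWithResourcesQueries {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/your_database"; // placeholder, as above

        try (Connection conn = DriverManager.getConnection(url, "your_username", "your_password");
             Statement statement = conn.createStatement()) {

            // First query: the ResultSet is scoped to this block and closed on exit.
            try (ResultSet rs = statement.executeQuery("SELECT id, name FROM your_table WHERE condition1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }

            // Second query on the same Statement: even without the explicit close above,
            // re-executing would implicitly close the previous ResultSet.
            try (ResultSet rs = statement.executeQuery("SELECT id, name FROM your_table WHERE condition2")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}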

JDBC defines not only the interfaces but also the usage pattern (driver → connection → statement → result set → close).

MyBatis example:

On each request, MyBatis releases the ResultSet first and then the Statement.

Releasing the Statement:

org.apache.ibatis.executor.SimpleExecutor#doUpdate

org.apache.ibatis.executor.keygen.Jdbc3KeyGenerator#processBatch

Releasing the ResultSet:
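The MyBatis source itself is not reproduced here; the sketch below only illustrates the release order the call sites above follow — ResultSet first, then Statement. closeQuietly is a hypothetical helper for illustration, not a MyBatis API.

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcCleanup {
    // Illustrative helper: release the ResultSet before the Statement,
    // swallowing close-time exceptions the way frameworks typically do.
    // (Closing a Statement also closes its ResultSets, but closing in this
    // order keeps each resource's lifetime explicit.)
    static void closeQuietly(ResultSet rs, Statement stmt) {
        if (rs != null) {
            try { rs.close(); } catch (SQLException ignored) { }
        }
        if (stmt != null) {
            try { stmt.close(); } catch (SQLException ignored) { }
        }
    }
}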

3. Packaging plugins: maven-shade-plugin and maven-assembly-plugin

Must-know packaging plugins for Java developers: maven-shade-plugin and maven-assembly-plugin - 墨天轮

4. PCIe turns out to be this simple — worth a read!

PCIe turns out to be this simple — worth a read! Source: the WeChat account 「Linux阅码场」, author: 木叶. Hard drives are a device everyone knows; along the way, from HDD to SSD, from... - 雪球

5. CAS: a hands-on introduction

org.jasig.cas.client.validation.AbstractCasProtocolUrlBasedTicketValidator#retrieveResponseFromServer

org.jasig.cas.client.validation.AbstractUrlBasedTicketValidator#validate

https://blog.51cto.com/wuyongyin/5321466
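For reference, a minimal sketch (placeholder URLs and ticket) of validating a service ticket with the java-cas-client library; internally, validate builds the validation URL and retrieves it through retrieveResponseFromServer, the two methods noted above.

import org.jasig.cas.client.validation.Assertion;
import org.jasig.cas.client.validation.Cas20ServiceTicketValidator;
import org.jasig.cas.client.validation.TicketValidationException;

public class CasTicketValidationDemo {
    public static void main(String[] args) {
        // Placeholder: replace with the real CAS server prefix.
        Cas20ServiceTicketValidator validator =
                new Cas20ServiceTicketValidator("https://cas.example.com/cas");
        try {
            // "ST-..." is the service ticket received on the callback;
            // the second argument is the service URL the ticket was issued for.
            Assertion assertion =
                    validator.validate("ST-1234-example", "https://app.example.com/login/cas");
            System.out.println("Authenticated principal: " + assertion.getPrincipal().getName());
        } catch (TicketValidationException e) {
            // Validation failed: the ticket is invalid, expired, or bound to another service.
            e.printStackTrace();
        }
    }
}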

6. Cracking (license keys)

Some keys for testing - jetbra.in

JETBRA.IN CHECKER | IPFS

7. Kernel memory-protection mechanisms (how Linux prevents user mode from accessing kernel memory)

https://www.baidu.com/s?wd=linux%E5%BA%95%E5%B1%82%E5%A6%82%E4%BD%95%E9%99%90%E5%88%B6%E7%94%A8%E6%88%B7%E6%80%81%E8%AE%BF%E9%97%AE%E5%86%85%E6%A0%B8%E6%80%81&ie=utf-8
