Flink: Writing Data to Redis with a RedisSink

  1. Configure the Redis connection info with a FlinkJedisPoolConfig
  2. Pass a new RedisSink to the stream's addSink()
  3. Give the RedisSink the Redis config and a custom class that extends RedisMapper
  4. Implement the custom mapper's methods (a filled-in example follows the template below)
    Dependencies
<dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_2.11</artifactId>
            <version>1.10.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
            <version>1.10.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
            <version>1.10.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.bahir</groupId>
            <artifactId>flink-connector-redis_2.11</artifactId>
            <version>1.0</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.25</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
            <version>1.10.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_2.11</artifactId>
            <version>1.10.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner-blink_2.11</artifactId>
            <version>1.10.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
            <version>1.10.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-csv</artifactId>
            <version>1.10.2</version>
        </dependency>

    </dependencies>
    <build>
        <plugins> <!-- This plugin compiles the Scala code into class files -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.4.6</version>
                <executions>
                    <execution> <!-- Bind to Maven's compile phase -->
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.0.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

Code

import org.apache.flink.streaming.connectors.redis.RedisSink
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig
import org.apache.flink.streaming.connectors.redis.common.mapper.{RedisCommand, RedisCommandDescription, RedisMapper}

// Build a FlinkJedisPoolConfig with the Redis connection info
val conf = new FlinkJedisPoolConfig.Builder()
   .setHost("localhost")
   .setPort(6379)
   .build()

 // Pass a RedisSink to addSink
 dataStream.addSink(new RedisSink[YourDataType](conf, new MyRedisMapper))

// Custom class extending RedisMapper
class MyRedisMapper extends RedisMapper[YourDataType]{

  // Describe the write command: HSET into the given hash
  override def getCommandDescription: RedisCommandDescription =
    new RedisCommandDescription(RedisCommand.HSET, "your-hash-name")

  // Which field of the record becomes the hash field (key)
  override def getKeyFromData(t: YourDataType): String = t.someField

  // Which field of the record becomes the value
  override def getValueFromData(t: YourDataType): String = t.otherField
}
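
To make the template concrete, here is a minimal self-contained sketch. The SensorReading case class, the sample records, and the hash name sensor_temperature are all hypothetical, invented only for this illustration:

```
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.redis.RedisSink
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig
import org.apache.flink.streaming.connectors.redis.common.mapper.{RedisCommand, RedisCommandDescription, RedisMapper}

// Hypothetical record type used only for this example
case class SensorReading(id: String, timestamp: Long, temperature: Double)

object RedisSinkExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val dataStream: DataStream[SensorReading] = env.fromElements(
      SensorReading("sensor_1", 1547718199L, 35.8),
      SensorReading("sensor_2", 1547718201L, 15.4)
    )

    val conf = new FlinkJedisPoolConfig.Builder()
      .setHost("localhost")
      .setPort(6379)
      .build()

    dataStream.addSink(new RedisSink[SensorReading](conf, new SensorRedisMapper))

    env.execute("redis sink example")
  }
}

// Write each reading into the hash "sensor_temperature": field = sensor id, value = temperature
class SensorRedisMapper extends RedisMapper[SensorReading] {
  override def getCommandDescription: RedisCommandDescription =
    new RedisCommandDescription(RedisCommand.HSET, "sensor_temperature")

  override def getKeyFromData(t: SensorReading): String = t.id

  override def getValueFromData(t: SensorReading): String = t.temperature.toString
}
```

After running it, `HGETALL sensor_temperature` in redis-cli should show one field per sensor id.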
Flink can read Kafka data with FlinkKafkaConsumer and persist the consumer offsets in its checkpointed state, which is managed by the configured state backend. Note that Flink does not ship a Redis state backend; if you want a copy of the offsets in Redis, you keep a regular state backend (e.g. RocksDB) and additionally write the offsets out through a sink. The steps are as follows:

1. Add the dependencies

Add the Kafka connector, the RocksDB state backend, and the Bahir Redis connector to the Flink project:

```
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-statebackend-rocksdb_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.bahir</groupId>
    <artifactId>flink-connector-redis_2.11</artifactId>
    <version>1.0</version>
</dependency>
```

2. Configure the state backend

Enable checkpointing and set RocksDB as the state backend so that the Kafka offsets survive failures:

```
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Offsets are stored in checkpoints, so checkpointing must be enabled
        env.enableCheckpointing(60_000);

        // RocksDB state backend with incremental checkpoints
        env.setStateBackend(new RocksDBStateBackend("file:///tmp/checkpoints", true));

        env.execute("State Backend Example");
    }
}
```

Note: RocksDB holds the checkpointed state here; the write to Redis happens in step 4.
3. Configure the Kafka consumer

Use FlinkKafkaConsumer to read from Kafka. With checkpointing on, Flink snapshots the offsets into the state backend; setCommitOffsetsOnCheckpoints(true) additionally commits them back to Kafka on each successful checkpoint (useful for monitoring only, since Flink restores from its own checkpoints, not from Kafka):

```
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Properties;

public class KafkaToRedisExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("group.id", "my-group");

        FlinkKafkaConsumer<String> kafkaConsumer =
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), properties);
        kafkaConsumer.setStartFromEarliest();
        kafkaConsumer.setCommitOffsetsOnCheckpoints(true);

        DataStream<String> stream = env.addSource(kafkaConsumer);

        stream.map(String::toUpperCase).print();

        env.execute("Kafka to Redis Example");
    }
}
```

4. Write the offsets to Redis

A sink function has no access to the consumer's internal offset state, so the practical pattern is to attach topic/partition/offset to each record in a KafkaDeserializationSchema and let a sink persist them. The key scheme below (offset-<topic>-<partition>) is one possible convention, not a fixed API:

```
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import redis.clients.jedis.Jedis;

// Emits "topic|partition|offset|value" so the offset travels with the record
public class OffsetAwareSchema implements KafkaDeserializationSchema<String> {
    @Override
    public boolean isEndOfStream(String nextElement) { return false; }

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        return record.topic() + "|" + record.partition() + "|"
                + record.offset() + "|" + new String(record.value());
    }

    @Override
    public TypeInformation<String> getProducedType() { return Types.STRING; }
}

// Stores the latest seen offset per topic/partition in Redis
class OffsetToRedisSink extends RichSinkFunction<String> {
    private transient Jedis jedis;

    @Override
    public void open(Configuration parameters) {
        jedis = new Jedis("localhost", 6379);
    }

    @Override
    public void invoke(String value, Context context) {
        String[] parts = value.split("\\|", 4);
        jedis.set("offset-" + parts[0] + "-" + parts[1], parts[2]);
    }

    @Override
    public void close() {
        if (jedis != null) jedis.close();
    }
}
```

Wire it up with new FlinkKafkaConsumer<>("my-topic", new OffsetAwareSchema(), properties) and stream.addSink(new OffsetToRedisSink()).
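
To sanity-check the result, a small Jedis snippet can read back a stored offset; the key below assumes the hypothetical key scheme above and partition 0 of my-topic:

```
import redis.clients.jedis.Jedis

// Read back the offset written by the sink (hypothetical key scheme from step 4)
object CheckOffset extends App {
  val jedis = new Jedis("localhost", 6379)
  println(jedis.get("offset-my-topic-0"))
  jedis.close()
}
```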