InfluxDbTemplate User Guide
Introduction
InfluxDbTemplate is a general-purpose InfluxDB template class built on flux-dsl. It provides:
- ✅ single-record and batch writes
- ✅ basic and extensible queries
- ✅ query by tag (single or multiple values, auto-optimized)
- ✅ combined multi-tag, multi-value queries
- ✅ automatic parallel queries above a configurable threshold
- ✅ automatic row-to-column pivot
Architecture
Core class relationships

```
┌─────────────────────────────────────────────────────────────────┐
│                   InfluxDbAutoConfiguration                     │
│                        @Configuration                           │
│   @ConditionalOnProperty(prefix = "influxdb", name = "url")     │
│                                                                 │
│  Responsibility: auto-configures the InfluxDB-related beans     │
│  Condition: only active when influxdb.url is configured         │
│                                                                 │
│  Beans registered:                                              │
│    - InfluxDBClient          (InfluxDB client)                  │
│    - WriteApi                (write API)                        │
│    - QueryApi                (query API)                        │
│    - DeleteApi               (delete API)                       │
│    - ThreadPoolTaskExecutor  (thread pool for parallel queries) │
└─────────────────────────────────────────────────────────────────┘
                              │
                              │ injects
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                      InfluxDbProperties                         │
│          @ConfigurationProperties(prefix = "influxdb")          │
│                                                                 │
│  Properties:                                                    │
│    - url           InfluxDB address                             │
│    - token         authentication token                         │
│    - org           organization name                            │
│    - bucket        bucket name                                  │
│    - batchSize     parallel-query batch size (default 500)      │
│    - readTimeout   read timeout                                 │
│    - writeTimeout  write timeout                                │
└─────────────────────────────────────────────────────────────────┘
                              │
                              │ dependency injection
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                InfluxDbTemplate<Entity> (abstract)              │
│                                                                 │
│  Methods:                                                       │
│    - add / addBatch          write data                         │
│    - queryByFlux             basic query                        │
│    - queryByTag              single-tag query                   │
│    - queryByTagValues        multi-value query (auto-optimized) │
│    - queryByMultiTagValues   combined multi-tag query           │
│    - delete                  delete data                        │
│                                                                 │
│  Features:                                                      │
│    - dependencies injected via @Autowired (transparent to       │
│      subclasses)                                                │
│    - generic entity type resolved via reflection                │
│    - queries above batchSize run in parallel automatically      │
└─────────────────────────────────────────────────────────────────┘
                              │
                              │ extends
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                  YourRepository (business class)                │
│                          @Repository                            │
│               extends InfluxDbTemplate<YourEntity>              │
│                                                                 │
│  The constructor only needs the measurement name:               │
│    public YourRepository() {                                    │
│        super("your_measurement_name");                          │
│    }                                                            │
│                                                                 │
│  Business methods can be added:                                 │
│    - queryByIdList(...)                                         │
│    - queryByIdListSmart(...)   // smart query                   │
│    - ...                                                        │
└─────────────────────────────────────────────────────────────────┘
```
The three core classes

| Class | Responsibility | Location |
|---|---|---|
| InfluxDbAutoConfiguration | Auto-configuration; registers the InfluxDB beans | starter module |
| InfluxDbProperties | Maps configuration properties (application.yml) | starter module |
| InfluxDbTemplate&lt;Entity&gt; | Abstract template providing write/query/delete methods | starter module |
Usage

```java
@Repository
public class DeviceHistoryRepository extends InfluxDbTemplate<DeviceHistory> {

    public DeviceHistoryRepository() {
        super(DeviceHistory.MEASUREMENT);
    }

    public List<DeviceHistory> queryByDeviceId(Instant start, Instant end, String deviceId) {
        return this.queryByTag(start, end, "deviceId", deviceId);
    }
}
```
No dependencies need to be injected manually; InfluxDbTemplate autowires everything it needs: WriteApi, QueryApi, DeleteApi, InfluxDbProperties, and ThreadPoolTaskExecutor.
Quick Start
1. Add the dependency

```xml
<dependency>
    <groupId>com.tigeriot</groupId>
    <artifactId>influxdb-spring-boot3-starter</artifactId>
    <version>1.0.2</version>
</dependency>
```
2. Configure application.yml

```yaml
influxdb:
  url: http://localhost:8086
  token: your-token
  org: your-org
  bucket: your-bucket
  precision: ms
  read-timeout: 30s
  write-timeout: 30s
  connect-timeout: 10s
  batch-size: 500
```
Configuration properties

| Property | Description | Default |
|---|---|---|
| url | InfluxDB address | required |
| token | Authentication token | required |
| org | Organization name | required |
| bucket | Bucket name | required |
| precision | Time precision (s/ms/us/ns) | s |
| read-timeout | Read timeout | 10s |
| write-timeout | Write timeout | 10s |
| connect-timeout | Connect timeout | 10s |
| batch-size | Parallel-query batch size; larger value sets are split and queried in parallel | 500 |
3. Create the entity class

```java
@Measurement(name = "device_history")
public class DeviceHistory {

    public static final String MEASUREMENT = "device_history";

    @Column(timestamp = true)
    private Instant time;

    @Column(tag = true)
    private String deviceId;

    @Column(tag = true)
    private String sensorType;

    @Column
    private Double temperature;

    @Column
    private Double humidity;
}
```
4. Create a Repository

```java
@Repository
public class DeviceHistoryRepository extends InfluxDbTemplate<DeviceHistory> {

    public DeviceHistoryRepository() {
        super(DeviceHistory.MEASUREMENT);
    }
}
```
5. Use it

```java
@Service
public class DeviceService {

    @Autowired
    private DeviceHistoryRepository repository;

    public void save(DeviceHistory history) {
        repository.add(history);
    }

    public List<DeviceHistory> query(Instant start, Instant end) {
        return repository.queryByFlux(start, end);
    }
}
```
Query Optimization
Optimal Flux query order

```
from(bucket) → range(time) → filter(measurement AND tags) → [aggregation] → pivot
```

| Order | Operation | Notes |
|---|---|---|
| 1 | from(bucket) | choose the data source |
| 2 | range(start, end) | must immediately follow from; time filtering comes first |
| 3 | filter(measurement AND tags) | put measurement and tags in the same filter |
| 4 | aggregation (optional) | aggregateWindow, mean, sum, ... |
| 5 | pivot | rows to columns, mapping into the entity class |
Why not contains()?

```flux
// ❌ Wrong: contains() cannot use the tag index and forces a full scan
|> filter(fn: (r) => contains(value: r["id"], set: ["id1", "id2"]))

// ✅ Right: OR-combined equality checks; every condition can use the index
|> filter(fn: (r) => r["id"] == "id1" or r["id"] == "id2")
```
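Generating that OR-of-equals filter for an arbitrary value list is mechanical. A minimal sketch in plain Java (`buildOrFilter` is a hypothetical helper written for illustration; the starter itself builds the same shape through flux-dsl `Restrictions`, not raw strings):

```java
import java.util.List;
import java.util.stream.Collectors;

public class OrFilterDemo {

    // Builds an index-friendly Flux filter line from a list of tag values.
    static String buildOrFilter(String tagName, List<String> values) {
        String body = values.stream()
                .map(v -> "r[\"" + tagName + "\"] == \"" + v + "\"")
                .collect(Collectors.joining(" or "));
        return "|> filter(fn: (r) => " + body + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildOrFilter("id", List.of("id1", "id2")));
        // |> filter(fn: (r) => r["id"] == "id1" or r["id"] == "id2")
    }
}
```

With many values the generated statement grows linearly, which is one reason the template caps each batch at batch-size values.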
Does tag order affect multi-tag query performance?
Answer: no. InfluxDB has no equivalent of MySQL's leftmost-prefix rule.
InfluxDB vs. MySQL indexing

| Database | Index type | Leftmost-prefix rule | Tag/column order |
|---|---|---|---|
| MySQL | composite index (B+ tree) | ✅ yes | order matters |
| InfluxDB | independent inverted index per tag | ❌ no | order is irrelevant |
How it works

```
MySQL composite index (a, b, c):
├── b is only usable after filtering on a
├── filtering on b without a → index not used
└── leftmost-prefix rule

InfluxDB tag indexes:
├── tag a → independent inverted index
├── tag b → independent inverted index
├── tag c → independent inverted index
└── every tag can use its index independently; order is irrelevant
```
Example

```java
// These two insertion orders produce equivalent queries:
Map<String, Collection<String>> tagValuesMap = new LinkedHashMap<>();
tagValuesMap.put("deviceId", Set.of("device_001", "device_002"));
tagValuesMap.put("sensorType", Set.of("temperature"));

// ...same result with the order reversed:
tagValuesMap = new LinkedHashMap<>();
tagValuesMap.put("sensorType", Set.of("temperature"));
tagValuesMap.put("deviceId", Set.of("device_001", "device_002"));
```
Summary

| Question | Answer |
|---|---|
| Does tag order affect indexing? | ❌ No; each tag has an independent index |
| Is there a leftmost-prefix rule? | ❌ No; this differs from MySQL |
| Do I need to care about order? | ❌ No; InfluxDB optimizes automatically |
| What does affect performance? | ✅ Time range, number of tag values, use of aggregation |
Basic Operations
Write a single record

```java
DeviceHistory history = new DeviceHistory();
history.setDeviceId("device_001");
history.setTemperature(25.5);
history.setTime(Instant.now());
repository.add(history);
```

Batch write

```java
List<DeviceHistory> historyList = new ArrayList<>();
// ... fill the list ...
repository.addBatch(historyList);
```
Query Operations
1. Basic time-range query

```java
Instant start = Instant.now().minus(24, ChronoUnit.HOURS);
Instant end = Instant.now();
List<DeviceHistory> list = repository.queryByFlux(start, end);
```

2. Single tag, single value

```java
List<DeviceHistory> list = repository.queryByTag(start, end, "deviceId", "device_001");
```

3. Single tag, multiple values (auto-optimized)

```java
Set<String> deviceIds = Set.of("device_001", "device_002", "device_003");
List<DeviceHistory> list = repository.queryByTagValues(start, end, "deviceId", deviceIds);
```

4. Multiple tags, multiple values

```java
Map<String, Collection<String>> tagValuesMap = new LinkedHashMap<>();
tagValuesMap.put("deviceId", Set.of("device_001", "device_002"));
tagValuesMap.put("sensorType", Set.of("temperature", "humidity"));
List<DeviceHistory> list = repository.queryByMultiTagValues(start, end, tagValuesMap);
```

5. Custom Flux query

```java
Flux flux = Flux.from("your-bucket")
        .range(start, end)
        .filter(Restrictions.measurement().equal("device_history"))
        .filter(Restrictions.tag("deviceId").equal("device_001"))
        .pivot(new String[]{"_time"}, new String[]{"_field"}, "_value");
List<DeviceHistory> list = repository.queryByFlux(flux);
```
Aggregation Queries
⭐ Aggregation grouping rules (important)
InfluxDB aggregates per series by default, where a series = measurement + one combination of all tag values.
This means that whether or not you pass tag conditions, aggregates are computed independently for each tag combination.
Suppose the entity declares two tags, id and type.

```
How the data is stored (each tag combination is a separate series):
┌─────────────────────────────────────────────────┐
│ Series 1: id=001, type=A → [data points...]     │
│ Series 2: id=001, type=B → [data points...]     │
│ Series 3: id=002, type=A → [data points...]     │
│ Series 4: id=002, type=B → [data points...]     │
└─────────────────────────────────────────────────┘

After aggregateWindow(20 minutes, mean):
→ each series computes its own mean; they are never mixed together
```

| Scenario | Grouping behavior |
|---|---|
| No tag condition | grouped by every tag combination (all devices queried, each computed independently) |
| With tag conditions | still grouped by tag combination (just over less data) |
| One combined aggregate wanted | call group() explicitly to drop the grouping |
```java
// 1) No tag condition: every series (tag combination) is aggregated independently
repository.queryByFlux(start, end,
        flux -> flux.aggregateWindow(20L, ChronoUnit.MINUTES, "mean")
);

// 2) With a tag condition: same grouping, just over less data
repository.queryByTag(start, end, "id", "device_001",
        flux -> flux.aggregateWindow(20L, ChronoUnit.MINUTES, "mean")
);

// 3) group() drops the series grouping, so everything is aggregated together
repository.queryByFlux(start, end,
        flux -> flux.group().aggregateWindow(20L, ChronoUnit.MINUTES, "mean")
);
```
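The per-series vs. group() semantics can be imitated with plain Java collections, with no InfluxDB involved (an illustration of the grouping rule only; `Point` and the helper methods are invented for this sketch):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class SeriesGroupingDemo {

    record Point(String id, String type, double value) {}

    // Default InfluxDB behavior: one aggregate per series (tag combination).
    static Map<String, Double> perSeriesMean(List<Point> points) {
        return points.stream().collect(Collectors.groupingBy(
                p -> p.id() + "/" + p.type(),
                TreeMap::new,
                Collectors.averagingDouble(Point::value)));
    }

    // After group(): a single aggregate over all series.
    static double globalMean(List<Point> points) {
        return points.stream().mapToDouble(Point::value).average().orElse(0.0);
    }

    public static void main(String[] args) {
        List<Point> points = List.of(
                new Point("001", "A", 10.0), new Point("001", "A", 20.0),
                new Point("001", "B", 30.0), new Point("002", "A", 40.0));
        System.out.println(perSeriesMean(points)); // {001/A=15.0, 001/B=30.0, 002/A=40.0}
        System.out.println(globalMean(points));    // 25.0
    }
}
```

The same four points yield three independent means under series grouping but a single mean of 25.0 once the grouping is dropped, which is exactly the difference between snippets 1 and 3 above.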
Common aggregate functions

| Function | Meaning | Example |
|---|---|---|
| mean | average | average temperature |
| sum | total | total flow |
| count | number of records | record count |
| min | minimum | lowest temperature |
| max | maximum | highest temperature |
| first | first value | starting value |
| last | last value | ending value |
| median | median | middle value |
| stddev | standard deviation | data volatility |
| spread | range (max − min) | spread of values |
1. Time-window aggregation: aggregateWindow

```java
// 10-minute windows, average value
List<DeviceHistory> tenMinMean = repository.queryByFlux(start, end,
        flux -> flux.aggregateWindow(10L, ChronoUnit.MINUTES, "mean")
);

// hourly maximum
List<DeviceHistory> hourlyMax = repository.queryByFlux(start, end,
        flux -> flux.aggregateWindow(1L, ChronoUnit.HOURS, "max")
);

// daily record count
List<DeviceHistory> dailyCount = repository.queryByFlux(start, end,
        flux -> flux.aggregateWindow(1L, ChronoUnit.DAYS, "count")
);
```
2. Aggregation with tag conditions

```java
Set<String> deviceIds = Set.of("device_001", "device_002");
List<DeviceHistory> list = repository.queryByTagValues(start, end, "deviceId", deviceIds,
        flux -> flux.aggregateWindow(20L, ChronoUnit.MINUTES, "mean")
);
```

3. Multiple tags + aggregation

```java
Map<String, Collection<String>> tagValuesMap = new LinkedHashMap<>();
tagValuesMap.put("deviceId", Set.of("device_001", "device_002"));
tagValuesMap.put("sensorType", Set.of("temperature"));
List<DeviceHistory> list = repository.queryByMultiTagValues(start, end, tagValuesMap,
        flux -> flux.aggregateWindow(30L, ChronoUnit.MINUTES, "mean")
);
```

4. Downsampling (reduce the data volume)

```java
List<DeviceHistory> list = repository.queryByFlux(start, end,
        flux -> flux.aggregateWindow(1L, ChronoUnit.HOURS, "mean")
);
```

5. Latest data point

```java
List<DeviceHistory> list = repository.queryByFlux(start, end,
        flux -> flux.last()
);
```

6. Earliest data point

```java
List<DeviceHistory> list = repository.queryByFlux(start, end,
        flux -> flux.first()
);
```

7. Sorting and limiting

```java
List<DeviceHistory> list = repository.queryByFlux(start, end,
        flux -> flux.sort(new String[]{"_time"}, true).limit(100)
);
```
8. Grouping notes
aggregateWindow keeps InfluxDB's default per-series grouping (one group per tag combination), so an explicit groupBy is usually unnecessary.

```java
// default: aggregated per series (per tag combination)
flux -> flux.aggregateWindow(1L, ChronoUnit.HOURS, "mean")

// group() first: one global aggregate across all series
flux -> flux.group().aggregateWindow(1L, ChronoUnit.HOURS, "mean")

// regroup by a single tag before aggregating
flux -> flux.groupBy(new String[]{"deviceId"}).aggregateWindow(1L, ChronoUnit.HOURS, "mean")
```
Delete Operations
Delete a time range

```java
OffsetDateTime start = OffsetDateTime.now().minusDays(7);
OffsetDateTime end = OffsetDateTime.now().minusDays(1);
repository.delete(start, end);
```

Conditional delete

```java
repository.delete(start, end, "_measurement='device_history' AND deviceId='device_001'");
```

Delete with full parameters

```java
repository.delete(start, end, "bucket-name", "org-name",
        "_measurement='device_history' AND deviceId='device_001'");
```
Advanced Usage
1. How parallel queries work
When a query covers more than 500 tag values, parallel querying kicks in automatically:

```
Original request: 2000 IDs
        ↓
Split into batches: [500] [500] [500] [500]
        ↓
Parallel queries (on the thread pool)
        ↓
Merge the results
```

2. Batch size
The default batch size of 500 balances two goals:
- fewer batches, so less network overhead
- keeping any single Flux statement from growing too long
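The split → parallel → merge flow can be sketched in plain Java (a simplified stand-in: `divideList` here approximates what the starter's ListUtils presumably does, and the stubbed "query" simply returns its batch; the real template submits each batch to its injected ThreadPoolTaskExecutor):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelBatchDemo {

    // Split a list into batches of at most batchSize elements.
    static <T> List<List<T>> divideList(List<T> list, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < list.size(); i += batchSize) {
            batches.add(list.subList(i, Math.min(i + batchSize, list.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> ids = IntStream.range(0, 2000)
                .mapToObj(i -> "id_" + i).collect(Collectors.toList());

        List<List<String>> batches = divideList(ids, 500);
        System.out.println(batches.size()); // 4

        // Query each batch concurrently, then merge (the "query" here is a stub).
        List<String> merged = batches.stream()
                .map(b -> CompletableFuture.supplyAsync(() -> b))
                .collect(Collectors.toList())
                .stream()
                .map(CompletableFuture::join)
                .flatMap(List::stream)
                .collect(Collectors.toList());
        System.out.println(merged.size()); // 2000
    }
}
```

Because join() preserves the order of the futures list, the merged result keeps the batch order even though the queries themselves run concurrently.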
3. Complex condition combinations

```java
List<DeviceHistory> list = repository.queryByFlux(start, end, flux -> {
    return flux
            .filter(Restrictions.and(
                    Restrictions.tag("deviceId").equal("device_001"),
                    Restrictions.column("temperature").greater(20.0)
            ))
            .aggregateWindow(10L, ChronoUnit.MINUTES, "mean");
});
```
4. Downsampling strategy

| Time range | Suggested window | Notes |
|---|---|---|
| within 1 hour | 1 minute | high precision |
| within 1 day | 5-10 minutes | moderate precision |
| within 1 week | 30 minutes to 1 hour | reduced volume |
| within 1 month | 1-6 hours | trend display |
| within 1 year | 1 day | long-term trend |

```java
public List<DeviceHistory> queryWithAutoWindow(Instant start, Instant end, String deviceId) {
    long hours = Duration.between(start, end).toHours();
    long windowMinutes;
    if (hours <= 1) {
        windowMinutes = 1;
    } else if (hours <= 24) {
        windowMinutes = 10;
    } else if (hours <= 168) {
        windowMinutes = 60;
    } else {
        windowMinutes = 360;
    }
    return repository.queryByTag(start, end, "deviceId", deviceId,
            flux -> flux.aggregateWindow(windowMinutes, ChronoUnit.MINUTES, "mean")
    );
}
```
FAQ
Q1: Queries are slow. What should I check?
- Whether contains() is used: switch to queryByTagValues.
- Narrow the time range: range is the most important filter.
- Downsample with aggregation: return less data.
- Confirm the column is a tag: tags are indexed, fields are not.

Q2: The query returns nothing?
- Check the time range (watch out for time zones).
- Check that the measurement name matches.
- Check that the tag value exists.
- Run the Flux directly in the InfluxDB UI to verify.

Q3: How do I inspect the generated Flux?

```java
Flux flux = Flux.from("bucket")
        .range(start, end)
        .filter(Restrictions.measurement().equal("device_history"));
System.out.println(flux.toString());
```

Q4: Queries with many IDs time out?
- Make sure you use queryByTagValues (it parallelizes automatically).
- Increase the thread pool size.
- Narrow the time range.
- Downsample with aggregation.

Q5: Time zone issues?
InfluxDB stores timestamps in UTC, so when querying:

```java
// Instant is always UTC, so this is safe
Instant start = Instant.now().minus(24, ChronoUnit.HOURS);

// When converting a local time, state the zone explicitly
LocalDateTime localTime = LocalDateTime.of(2024, 1, 1, 0, 0);
Instant instant = localTime.atZone(ZoneId.of("Asia/Shanghai")).toInstant();
```
API Reference

| Method | Parameters | Returns | Description |
|---|---|---|---|
| add(entity) | entity | void | write one record |
| addBatch(list) | entity list | void | batch write |
| queryByFlux(start, end) | time range | List | basic query |
| queryByFlux(start, end, condition) | time + condition | List | extensible query |
| queryByTag(start, end, tag, value) | single tag, single value | List | exact tag query |
| queryByTagValues(start, end, tag, values) | single tag, multiple values | List | auto-optimized query |
| queryByTagValues(start, end, tag, values, condition) | single tag, multiple values + condition | List | multi-value query with aggregation |
| queryByMultiTagValues(start, end, map) | multiple tags, multiple values | List | combined query |
| queryByMultiTagValues(start, end, map, condition) | multiple tags, multiple values + condition | List | combined query with aggregation |
| delete(start, stop) | time range | void | delete data |
| delete(start, stop, predicate) | time + predicate | void | conditional delete |
Version History

| Version | Date | Changes |
|---|---|---|
| 1.0.2 | 2024-12 | parallel queries, multi-tag support, query optimizations |
| 1.0.1 | 2024-11 | basic features |
| 1.0.0 | 2024-10 | initial release |
Appendix: Core Class Sources
1. InfluxDbAutoConfiguration (auto-configuration)

```java
package com.tigeriot.influxdbspringboot3starter.influxdbAutoConfiguration;

import com.influxdb.annotations.Measurement;
import com.influxdb.client.*;
import com.influxdb.client.domain.WritePrecision;
import com.tigeriot.globalcommonservice.global.threadPool.DefaultThreadPoolTaskExecutorBuilder;
import io.reactivex.rxjava3.core.BackpressureOverflowStrategy;
import okhttp3.OkHttpClient;
import okhttp3.Protocol;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import java.util.Collections;

@ConditionalOnProperty(prefix = "influxdb", name = "url")
@EnableConfigurationProperties(InfluxDbProperties.class)
@Configuration
public class InfluxDbAutoConfiguration {

    private final InfluxDbProperties properties;

    public InfluxDbAutoConfiguration(InfluxDbProperties properties) {
        this.properties = properties;
    }

    @Bean
    public InfluxDBClient influxDBClient() {
        OkHttpClient.Builder okHttpBuilder = new OkHttpClient.Builder()
                .protocols(Collections.singletonList(Protocol.HTTP_1_1))
                .readTimeout(properties.getReadTimeout())
                .writeTimeout(properties.getWriteTimeout())
                .connectTimeout(properties.getConnectTimeout());
        InfluxDBClientOptions.Builder influxBuilder = InfluxDBClientOptions.builder()
                .url(properties.getUrl())
                .bucket(properties.getBucket())
                .authenticateToken(properties.getToken().toCharArray())
                .org(properties.getOrg())
                .precision(WritePrecision.fromValue(properties.getPrecision()))
                .okHttpClient(okHttpBuilder);
        InfluxDBClientOptions build = influxBuilder.build();
        return InfluxDBClientFactory.create(build).setLogLevel(properties.getLogLevel());
    }

    @Bean
    public WriteApi writeApi(InfluxDBClient influxDBClient) {
        WriteOptions options = new WriteOptions.Builder()
                .batchSize(1000)
                .flushInterval(1000)
                .jitterInterval(1000)
                .retryInterval(1000)
                .maxRetries(3)
                .maxRetryDelay(125000)
                .maxRetryTime(180000)
                .exponentialBase(2)
                .bufferLimit(10000)
                .backpressureStrategy(BackpressureOverflowStrategy.ERROR)
                .build();
        return influxDBClient.makeWriteApi(options);
    }

    @Bean
    public QueryApi queryApi(InfluxDBClient influxDBClient) {
        return influxDBClient.getQueryApi();
    }

    @Bean
    public DeleteApi deleteApi(InfluxDBClient influxDBClient) {
        return influxDBClient.getDeleteApi();
    }

    @Bean
    public ThreadPoolTaskExecutor influxDbThreadPoolTaskExecutor() {
        DefaultThreadPoolTaskExecutorBuilder builder = new DefaultThreadPoolTaskExecutorBuilder();
        builder.setThreadNamePrefix("influxDb-task-executor-");
        return builder.buildNew();
    }
}
```
2. InfluxDbProperties (configuration properties)

```java
package com.tigeriot.influxdbspringboot3starter.influxdbAutoConfiguration;

import com.influxdb.LogLevel;
import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;

import java.time.Duration;

@Data
@ConfigurationProperties(prefix = "influxdb")
public class InfluxDbProperties {

    private static final int DEFAULT_TIMEOUT = 10_000;

    private String url;
    private String token;
    private String org;
    private String bucket;
    private LogLevel logLevel = LogLevel.NONE;
    private Duration readTimeout = Duration.ofMillis(DEFAULT_TIMEOUT);
    private Duration writeTimeout = Duration.ofMillis(DEFAULT_TIMEOUT);
    private Duration connectTimeout = Duration.ofMillis(DEFAULT_TIMEOUT);
    private String precision = "s";
    private int batchSize = 500;
}
```
3. InfluxDbTemplate (abstract template)

```java
package com.tigeriot.influxdbspringboot3starter.template;

import com.influxdb.client.DeleteApi;
import com.influxdb.client.QueryApi;
import com.influxdb.client.WriteApi;
import com.influxdb.client.domain.WritePrecision;
import com.influxdb.query.dsl.Flux;
import com.influxdb.query.dsl.functions.FilterFlux;
import com.influxdb.query.dsl.functions.restriction.Restrictions;
import com.tigeriot.globalcommonservice.global.utils.ListUtils;
import com.tigeriot.influxdbspringboot3starter.influxdbAutoConfiguration.InfluxDbProperties;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.util.*;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;
import java.util.stream.Collectors;

public abstract class InfluxDbTemplate<Entity> {

    public static final String ROW_TO_COLUMN_FLUX =
            "|> pivot(rowKey: [\"_time\"], columnKey: [\"_field\"], valueColumn: \"_value\")";

    @Autowired
    private WriteApi writeApi;
    @Autowired
    private InfluxDbProperties influxDbProperties;
    @Autowired
    private QueryApi queryApi;
    @Autowired
    private DeleteApi deleteApi;
    @Autowired
    private ThreadPoolTaskExecutor influxDbThreadPoolTaskExecutor;

    private final String measurement;
    private final Class<Entity> entityClass;

    @SuppressWarnings("unchecked")
    public InfluxDbTemplate(String measurement) {
        this.measurement = measurement;
        this.entityClass = (Class<Entity>) getGenericType();
    }

    private int getBatchSize() {
        return influxDbProperties.getBatchSize();
    }

    private Class<?> getGenericType() {
        Type genericSuperclass = getClass().getGenericSuperclass();
        if (genericSuperclass instanceof ParameterizedType parameterizedType) {
            Type[] actualTypeArguments = parameterizedType.getActualTypeArguments();
            if (actualTypeArguments.length > 0 && actualTypeArguments[0] instanceof Class) {
                return (Class<?>) actualTypeArguments[0];
            }
        }
        throw new IllegalStateException("无法获取泛型类型,请确保子类正确指定泛型参数");
    }

    public void add(Entity entity) {
        writeApi.writeMeasurement(WritePrecision.MS, entity);
    }

    public void addBatch(List<Entity> entityList) {
        if (entityList == null || entityList.isEmpty()) {
            return;
        }
        writeApi.writeMeasurements(WritePrecision.MS, entityList);
    }

    public List<Entity> queryByFlux(Flux flux) {
        return queryApi.query(flux.toString(), entityClass);
    }

    private List<Entity> executeQuery(Instant start, Instant end, Restrictions tagRestriction, Function<Flux, Flux> condition) {
        Flux flux = Flux.from(influxDbProperties.getBucket()).range(start, end);
        Restrictions measurementRestriction = Restrictions.measurement().equal(measurement);
        Restrictions combinedRestriction = tagRestriction != null
                ? Restrictions.and(measurementRestriction, tagRestriction)
                : measurementRestriction;
        flux = ((FilterFlux) flux.filter(combinedRestriction));
        if (condition != null) {
            flux = condition.apply(flux);
        }
        flux = flux.pivot(new String[]{"_time"}, new String[]{"_field"}, "_value");
        return queryByFlux(flux);
    }

    public List<Entity> queryByFlux(Instant start, Instant end) {
        return queryByFlux(start, end, null);
    }

    public List<Entity> queryByFlux(Instant start, Instant end, Function<Flux, Flux> condition) {
        return executeQuery(start, end, null, condition);
    }

    public List<Entity> queryByTag(Instant start, Instant end, String tagName, String tagValue) {
        return queryByTag(start, end, tagName, tagValue, null);
    }

    public List<Entity> queryByTag(Instant start, Instant end, String tagName, String tagValue, Function<Flux, Flux> condition) {
        Restrictions tagRestriction = Restrictions.tag(tagName).equal(tagValue);
        return executeQuery(start, end, tagRestriction, condition);
    }

    public List<Entity> queryByTagValues(Instant start, Instant end, String tagName, Collection<String> values) {
        return queryByTagValues(start, end, tagName, values, null);
    }

    public List<Entity> queryByTagValues(Instant start, Instant end, String tagName, Collection<String> values, Function<Flux, Flux> condition) {
        if (values == null || values.isEmpty()) {
            return new ArrayList<>();
        }
        List<String> valueList = new ArrayList<>(values);
        if (valueList.size() == 1) {
            return queryByTag(start, end, tagName, valueList.get(0), condition);
        }
        if (valueList.size() <= getBatchSize()) {
            return queryByTagValuesBatch(start, end, tagName, valueList, condition);
        }
        return queryByTagValuesParallel(start, end, tagName, valueList, condition);
    }

    private List<Entity> queryByTagValuesBatch(Instant start, Instant end, String tagName, List<String> values, Function<Flux, Flux> condition) {
        Restrictions tagOrCondition = buildOrRestrictions(tagName, values);
        return executeQuery(start, end, tagOrCondition, condition);
    }

    private List<Entity> queryByTagValuesParallel(Instant start, Instant end, String tagName, List<String> values, Function<Flux, Flux> condition) {
        List<List<String>> batches = ListUtils.divideList(values, getBatchSize());
        List<CompletableFuture<List<Entity>>> futures = batches.stream()
                .map(batch -> CompletableFuture.supplyAsync(
                        () -> queryByTagValuesBatch(start, end, tagName, batch, condition),
                        influxDbThreadPoolTaskExecutor
                ))
                .collect(Collectors.toList());
        return futures.stream()
                .map(CompletableFuture::join)
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public List<Entity> queryByMultiTagValues(Instant start, Instant end, Map<String, Collection<String>> tagValuesMap) {
        return queryByMultiTagValues(start, end, tagValuesMap, null);
    }

    public List<Entity> queryByMultiTagValues(Instant start, Instant end, Map<String, Collection<String>> tagValuesMap, Function<Flux, Flux> condition) {
        if (tagValuesMap == null || tagValuesMap.isEmpty()) {
            return queryByFlux(start, end, condition);
        }
        String largestTagName = null;
        int largestSize = 0;
        for (Map.Entry<String, Collection<String>> entry : tagValuesMap.entrySet()) {
            if (entry.getValue() != null && entry.getValue().size() > largestSize) {
                largestSize = entry.getValue().size();
                largestTagName = entry.getKey();
            }
        }
        if (largestSize > getBatchSize()) {
            return queryByMultiTagValuesParallel(start, end, tagValuesMap, largestTagName, condition);
        }
        return queryByMultiTagValuesBatch(start, end, tagValuesMap, condition);
    }

    private List<Entity> queryByMultiTagValuesBatch(Instant start, Instant end, Map<String, Collection<String>> tagValuesMap, Function<Flux, Flux> condition) {
        List<Restrictions> allTagRestrictions = new ArrayList<>();
        for (Map.Entry<String, Collection<String>> entry : tagValuesMap.entrySet()) {
            String tagName = entry.getKey();
            Collection<String> values = entry.getValue();
            if (values == null || values.isEmpty()) {
                continue;
            }
            List<String> valueList = new ArrayList<>(values);
            if (valueList.size() == 1) {
                allTagRestrictions.add(Restrictions.tag(tagName).equal(valueList.get(0)));
            } else {
                allTagRestrictions.add(buildOrRestrictions(tagName, valueList));
            }
        }
        if (allTagRestrictions.isEmpty()) {
            return executeQuery(start, end, null, condition);
        }
        Restrictions combinedRestriction = allTagRestrictions.size() == 1
                ? allTagRestrictions.get(0)
                : Restrictions.and(allTagRestrictions.toArray(new Restrictions[0]));
        return executeQuery(start, end, combinedRestriction, condition);
    }

    private List<Entity> queryByMultiTagValuesParallel(Instant start, Instant end, Map<String, Collection<String>> tagValuesMap, String largeTagName, Function<Flux, Flux> condition) {
        Collection<String> largeTagValues = tagValuesMap.get(largeTagName);
        List<List<String>> batches = ListUtils.divideList(new ArrayList<>(largeTagValues), getBatchSize());
        Map<String, Collection<String>> otherTagsMap = new LinkedHashMap<>();
        for (Map.Entry<String, Collection<String>> entry : tagValuesMap.entrySet()) {
            if (!entry.getKey().equals(largeTagName)) {
                otherTagsMap.put(entry.getKey(), entry.getValue());
            }
        }
        List<CompletableFuture<List<Entity>>> futures = batches.stream()
                .map(batch -> {
                    Map<String, Collection<String>> batchTagMap = new LinkedHashMap<>(otherTagsMap);
                    batchTagMap.put(largeTagName, batch);
                    return CompletableFuture.supplyAsync(
                            () -> queryByMultiTagValuesBatch(start, end, batchTagMap, condition),
                            influxDbThreadPoolTaskExecutor
                    );
                })
                .collect(Collectors.toList());
        return futures.stream()
                .map(CompletableFuture::join)
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public void delete(OffsetDateTime start, OffsetDateTime stop) {
        delete(start, stop, "_measurement='" + measurement + "'");
    }

    public void delete(OffsetDateTime start, OffsetDateTime stop, String predicate) {
        deleteApi.delete(start, stop, predicate, influxDbProperties.getBucket(), influxDbProperties.getOrg());
    }

    private Restrictions buildOrRestrictions(String tagName, List<String> values) {
        if (values.size() == 1) {
            return Restrictions.tag(tagName).equal(values.get(0));
        }
        Restrictions[] restrictions = values.stream()
                .map(value -> Restrictions.tag(tagName).equal(value))
                .toArray(Restrictions[]::new);
        return Restrictions.or(restrictions);
    }
}
```
4. Example business Repository

```java
package com.tigeriot.uicomponentcommon.customEcharts.historyCurveEcharts.rundevparam.influxRepository;

import com.tigeriot.globalcommonservice.model.valuemanage.field.SysFieldConst;
import com.tigeriot.influxdbspringboot3starter.template.InfluxDbTemplate;
import com.tigeriot.productmanagerserviceapiclient.model.iotdev.rundev.entity.MeasureCtlDevice;
import org.springframework.stereotype.Repository;

import java.time.Duration;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Date;
import java.util.List;
import java.util.Set;

@Repository
public class RunDevParamInfluxDbRepository extends InfluxDbTemplate<MeasureCtlDevice> {

    public RunDevParamInfluxDbRepository() {
        super(MeasureCtlDevice.MEASUREMENT);
    }

    public List<MeasureCtlDevice> queryBy24HourMean() {
        return this.queryByFlux(
                Instant.now().minus(24, ChronoUnit.HOURS),
                Instant.now(),
                flux -> flux.aggregateWindow(20L, ChronoUnit.MINUTES, "mean")
        );
    }

    public List<MeasureCtlDevice> queryById(Instant start, Instant end, String id) {
        return this.queryByTag(start, end, SysFieldConst.ID, id);
    }

    public List<MeasureCtlDevice> queryByIdList(Date start, Date end, Set<String> idSet) {
        return this.queryByTagValues(start.toInstant(), end.toInstant(), SysFieldConst.ID, idSet);
    }

    public List<MeasureCtlDevice> queryByIdListSmart(Date start, Date end, Set<String> idSet) {
        Instant startInstant = start.toInstant();
        Instant endInstant = end.toInstant();
        long days = Duration.between(startInstant, endInstant).toDays();
        if (days > 7) {
            // long ranges: downsample to hourly means
            return this.queryByTagValues(startInstant, endInstant, SysFieldConst.ID, idSet,
                    flux -> flux.aggregateWindow(1L, ChronoUnit.HOURS, "mean")
            );
        } else {
            return this.queryByTagValues(startInstant, endInstant, SysFieldConst.ID, idSet);
        }
    }

    public List<MeasureCtlDevice> queryByIdListWithAggregation(Date start, Date end, Set<String> idSet,
                                                               long windowDuration, ChronoUnit unit, String aggregateFn) {
        return this.queryByTagValues(
                start.toInstant(),
                end.toInstant(),
                SysFieldConst.ID,
                idSet,
                flux -> flux.aggregateWindow(windowDuration, unit, aggregateFn)
        );
    }
}
```