Strings and Descriptors

Symbian's choice not to use common types and functions for handling strings and binary buffers may look quite surprising to many developers new to this platform. And probably more than one developer has spent a couple of hours wondering about the respective merits of TBuf, TBufC, HBufC,...

The main characteristics of Symbian descriptors are:
-  strings and binary data are treated the same way
-  data can reside in any memory location, either ROM or RAM, on the stack or on the heap
-  a descriptor object maintains pointer and length information to describe its data. Some descriptors also include a maximum length

The following diagram shows the descriptor class hierarchy:

[Image: descriptors.png - the descriptor class hierarchy]

All descriptors inherit from the abstract class TDesC. They come in three types:
-  Buffer descriptors - where the data is part of the descriptor object and the descriptor object lives on the program stack: TBuf and TBufC,
-  Heap descriptors - where the data is part of the descriptor object and the descriptor object lives on the heap: HBufC,
-  Pointer descriptors - where the descriptor object is separate from the data it represents: TPtr and TPtrC.
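
As an illustration, here is a minimal sketch of how each concrete type might be declared (the sizes and contents are arbitrary):

TBuf<16> buf(_L("Hello"));      // buffer descriptor, modifiable, lives on the stack
TBufC<16> bufC(_L("World"));    // buffer descriptor, content fixed at construction
HBufC* heap = HBufC::NewL(16);  // heap descriptor; NewL() leaves on failure
TPtr ptr(heap->Des());          // modifiable pointer descriptor over the heap data
TPtrC ptrC(bufC);               // read-only pointer descriptor over bufC's data
delete heap;                    // heap descriptors must be freed explicitly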

An analogy with classical C/C++ gives:
-  TPtrC should be used where const char * used to be,
-  TBufC where char [] used to be.

The other classes have no equivalent.

Here is how data is organized within each class:

[Image: descriptors3.png - data layout within each descriptor class]

TDes and TDesC are abstract classes, so you cannot instantiate them. Their main use is as arguments of functions manipulating strings and binary data. In such functions, you will use:
-  const TDesC& for a read-only string or data
-  TDes& for passing a string or data you want to modify (see the sketch below).
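
For instance, a string-building function might be declared as follows (a minimal sketch; the function name is invented for illustration):

void BuildGreeting(const TDesC& aName, TDes& aOutput)
    {
    aOutput.Copy(_L("Hello "));  // Copy() replaces the current content
    aOutput.Append(aName);       // Append() requires a modifiable TDes&
    }

TBuf<32> greeting;
BuildGreeting(_L("World"), greeting);  // greeting now holds "Hello World"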

All the descriptors can have a width specifier: TDes8, TDes16, TDesC8, TDesC16, TBuf8, TBuf16,... The 8 specifies that the descriptor handles 8-bit data, and the 16 stands for 16-bit data. Generally, you will use the neutral form (TDes, TDesC,...) for text data and the 8-bit version (TDesC8,...) for binary content.
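
A small sketch of the width variants, assuming a Unicode build where the neutral form maps to 16 bits:

TBuf<8> text(_L("abc"));      // neutral form: 16-bit on Unicode builds
TBuf16<8> wide(_L16("abc"));  // explicitly 16-bit text
TBuf8<8> raw(_L8("ab"));      // explicitly 8-bit, typically binary content
raw.Append(TChar(0x01));      // raw bytes can be appended to an 8-bit buffer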

Literals

String constants are often defined using the _L() or _LIT() macros.

_L() produces a TPtrC from a literal value. It is typically used to pass a string to a function:

NEikonEnvironment::MessageBox(_L("Error: init file not found!"));

_LIT() generates a named constant that can be reused across your application:

_LIT(KMyFile, "c:\\System\\Apps\\MyApp\\MyFile.jpg");

The result of the _LIT() macro (KMyFile in the example above) is a literal descriptor of type TLitC, which can be used wherever a const TDesC& could.
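
For example, KMyFile can be passed directly to an API expecting a const TDesC&, such as RFile::Open() (a sketch assuming an already-connected file server session iFs):

RFile file;
User::LeaveIfError(file.Open(iFs, KMyFile, EFileRead));
// ... read from the file ...
file.Close();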

Usage

The most commonly used functions are defined in TDesC. They are:
-  Ptr() to get a pointer to the descriptor data
-  Length() to get the number of characters in the descriptor data
-  Size() to get the number of bytes in the descriptor data
-  Compare() or the operators ==, !=, >= and <= for descriptor data comparison
-  the operator [], which can be used - as in C/C++ - to get a single character from a descriptor string
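
A short sketch of these functions in use:

_LIT(KHello, "Hello");
TBufC<16> text(KHello);
TInt length = text.Length();       // 5 characters
TInt size = text.Size();           // 10 bytes on a 16-bit (Unicode) build
TText first = text[0];             // 'H', indexed as in C/C++
TInt same = text.Compare(KHello);  // 0 when the data is identical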

Several functions pictured above come in several flavors:
-  Append() and Num() have several variants that cannot all be described here but are well documented in the Symbian Developer Library.
-  Compare() has variants named CompareC() and CompareF(). So do Copy(), Find(), Locate(), and Match(): all of these have a C and/or F variant. C stands for Collated and F for Folded.
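
For instance, Num() and the Append() family can be combined to build up a string (a minimal sketch; see the Developer Library for the full set of overloads):

TBuf<32> buf;
buf.Num(42);                    // buf now holds "42"
buf.Append(_L(" items, id "));  // buf now holds "42 items, id "
buf.AppendNum(255, EHex);       // appends "ff" (hexadecimal radix)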

Collating and Folding

Folding is a relatively simple way of normalizing text for comparison: removing case distinctions, converting accented characters to characters without accents, and so on. Folding is typically used for tolerant comparisons.
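
For example, a folded comparison ignores case differences (a minimal sketch):

_LIT(KUpper, "HELLO");
_LIT(KLower, "hello");
TInt folded = KUpper().CompareF(KLower);  // 0: equal once folded
TInt exact = KUpper().Compare(KLower);    // non-zero: binary contents differ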

Collation is a much better and more powerful way to compare strings and produces a dictionary-like ('lexicographic') ordering. For languages using the Latin script, for example, collation is about deciding whether to ignore punctuation, whether to fold upper and lower case, how to treat accents, and so on. In a given locale there is usually a standard set of collation rules that can be used.
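
A collated comparison is obtained with CompareC(); the exact ordering it produces depends on the locale's collation rules (a minimal sketch):

_LIT(KFirst, "abc");
_LIT(KSecond, "ABC");
TInt order = KFirst().CompareC(KSecond);  // dictionary-like ordering per the current locale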

 