Head First Thread.join()

This article uses concrete examples to walk through how Thread.join() is used and how it is implemented internally: how to make the main thread wait for a child thread to finish, the timeout mechanism of join(long millis), and the use of isAlive().


Related reading:

  1. Thread pool internals
  2. A deep dive into Thread.sleep
  3. Thread interruption in multi-threaded code

Testing a thread without Thread.join()

The code first:

/**
 * Created by Zero on 2017/8/23.
 */
public class TestJoin implements Runnable {
    public static int a = 0;

    @Override
    public void run() {
        for (int i = 0; i < 5; i++) {
            a = a + 1;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TestJoin j = new TestJoin();
        Thread thread = new Thread(j);
        thread.start();
        System.out.println(a);
    }
}

Will the example above print 5? Probably not; most of the time it prints 0. As mentioned in the thread-pool article, starting and tearing down a thread takes time. Here, by the time the main thread reaches the println, thread may not have started yet, or may still be having resources allocated for it, so a is still 0.
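As a quick illustration (this sketch is mine, not from the original article), you can run the same race several times in a row; depending on scheduling, the printed value can be anything from 0 to 5:

public class RaceDemo {
    static int a = 0;

    public static void main(String[] args) throws InterruptedException {
        for (int run = 0; run < 10; run++) {
            a = 0;
            Thread t = new Thread(() -> {
                for (int i = 0; i < 5; i++) {
                    a = a + 1;
                }
            });
            t.start();
            // Printed before the worker is guaranteed to have run: may show 0..5.
            System.out.println("run " + run + ": a = " + a);
            t.join(); // finish this run cleanly before starting the next one
        }
    }
}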

So how can we make sure it prints 5 every time? Time for our protagonist, join(), to take the stage. The code:

/**
 * Created by apple on 2017/8/23.
 */
public class TestJoin implements Runnable {
    public static int a = 0;

    @Override
    public void run() {
        for (int i = 0; i < 5; i++) {
            a = a + 1;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TestJoin j = new TestJoin();
        Thread thread = new Thread(j);
        thread.start();
        /**
         * Demonstrate the effect of join(); compare with the threadAgain thread below.
         */
        thread.join();
        System.out.println(a);
        a = 0;
        Thread threadAgain = new Thread(j);
        threadAgain.start();
        System.out.println(a);
    }
}

The output will be 5 followed by 0: join() guarantees the first println only runs after thread has finished, while threadAgain has almost certainly not had a chance to run by the time the second println executes.

What Thread.join() does

Some write-ups describe Thread.join() as "merging two threads into one". That wording is easy to grasp, but it is not really accurate: they remain two separate threads. What actually happens is that after calling thread.join(), the calling thread does not continue until thread has finished. In the example above, when the main thread reaches thread.join(); it blocks, lets thread run to completion, and only then carries on.
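join() blocks whichever thread calls it, not just the main thread. A minimal sketch (my own, not from the original article) chaining two workers so they always finish in the order A, then B, then main:

public class JoinChain {
    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> System.out.println("A done"));
        Thread b = new Thread(() -> {
            try {
                a.join(); // B blocks here until A terminates
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            System.out.println("B done");
        });
        a.start();
        b.start();
        b.join(); // main waits for B, which in turn waited for A
        System.out.println("main done");
    }
}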

Testing Thread.join(long millis)

/**
 * Created by apple on 2017/8/23.
 */
public class TestJoin implements Runnable {
    public static int a = 0;

    @Override
    public void run() {
        for (int i = 0; i < 5; i++) {
            a = a + 1;
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TestJoin j = new TestJoin();
        Thread thread = new Thread(j);
        thread.start();
        /**
         * Demonstrate join(long millis): wait at most 3 seconds for thread to finish.
         */
        thread.join(3000);
        System.out.println("thread线程结果为:"+a);
        a = 0;
        Thread threadAgain = new Thread(j);
        threadAgain.start();
        System.out.println("threadAgain线程结果为:"+a);
    }
}

Output:

value of a after thread.join(3000): 3
value of a right after starting threadAgain: 0

The worker increments a and then sleeps for one second on each iteration, so after the 3-second timeout roughly three increments have completed (the exact value is timing dependent); threadAgain has barely started when the second println runs, hence 0.

Let's look at the JDK source before analysing it:

/**
     * Waits at most {@code millis} milliseconds for this thread to
     * die. A timeout of {@code 0} means to wait forever.
     *
     * <p> This implementation uses a loop of {@code this.wait} calls
     * conditioned on {@code this.isAlive}. As a thread terminates the
     * {@code this.notifyAll} method is invoked. It is recommended that
     * applications not use {@code wait}, {@code notify}, or
     * {@code notifyAll} on {@code Thread} instances.
     *
     * @param  millis
     *         the time to wait in milliseconds
     *
     * @throws  IllegalArgumentException
     *          if the value of {@code millis} is negative
     *
     * @throws  InterruptedException
     *          if any thread has interrupted the current thread. The
     *          <i>interrupted status</i> of the current thread is
     *          cleared when this exception is thrown.
     */
    public final synchronized void join(long millis)
    throws InterruptedException {
        long base = System.currentTimeMillis();
        long now = 0;

        if (millis < 0) {
            throw new IllegalArgumentException("timeout value is negative");
        }

        if (millis == 0) {
            while (isAlive()) {
                wait(0);
            }
        } else {
            while (isAlive()) {
                long delay = millis - now;
                if (delay <= 0) {
                    break;
                }
                wait(delay);
                now = System.currentTimeMillis() - base;
            }
        }
    }

In plain terms, the source is saying to the thread: kid, you get millis milliseconds; whether you finish the job in that time is your business. Finish early and we move on early (the wait loop exits as soon as isAlive() is false); miss the deadline and I stop waiting, so deal with it yourself.

By default, Thread.join() is equivalent to Thread.join(0), and 0 is where the true devotion shows: the calling thread waits indefinitely, until the target thread has finished executing, and only then continues.

isAlive(): tests whether the thread is still alive, i.e. it has been started and has not yet died; roughly, whether run() is still executing.
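After a timed join() returns, the target thread may or may not have finished, and isAlive() is how you tell the difference. A minimal sketch (my own, not from the original article):

public class JoinWithTimeout {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(5000); // simulate 5 seconds of work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        worker.join(3000); // wait at most 3 seconds
        if (worker.isAlive()) {
            System.out.println("worker is still running after the timeout");
        } else {
            System.out.println("worker finished within 3 seconds");
        }
    }
}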
