Multiple threads

This post shows a simple multithreading example written in Java. It contains two static nested classes, Nuaa1 and Nuaa2, both of which extend Thread (and, redundantly, also declare implements Runnable). The main method creates an instance of Nuaa1 and starts the thread, which prints Hello1.


public class Property {

    // Static nested class. Thread already implements Runnable, so the
    // "implements Runnable" clause is redundant but harmless.
    public static class Nuaa1 extends Thread implements Runnable {
        @Override
        public void run() {
            System.out.println("Hello1");
        }
    }

    // Static nested class
    public static class Nuaa2 extends Thread implements Runnable {
        @Override
        public void run() {
            System.out.println("Xiabing1");
        }
    }

    public static void main(String[] args) {
        Property.Nuaa1 thread1 = new Property.Nuaa1();
        thread1.start();  // starts a new thread that prints "Hello1"
    }
}
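
For comparison, here is a minimal sketch (not part of the original post; the class name PropertyDemo is hypothetical) that starts both nested classes and waits for them with join(). Because the two run() bodies execute on separate threads, the relative order of "Hello1" and "Xiabing1" in the output is not guaranteed.

public class PropertyDemo {

    public static void main(String[] args) throws InterruptedException {
        // Reuses the nested classes defined in Property above.
        Property.Nuaa1 thread1 = new Property.Nuaa1();
        Property.Nuaa2 thread2 = new Property.Nuaa2();

        thread1.start();   // prints "Hello1" on its own thread
        thread2.start();   // prints "Xiabing1" on its own thread

        // Wait for both worker threads before main exits.
        thread1.join();
        thread2.join();
    }
}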

 
  

This is the YAML configuration file:

%YAML:1.0

#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam;
imu: 1
num_of_cam: 1

imu_topic: "/mavros/imu/data_raw"    #"/imu0"
image0_topic: "/cam0/image_raw_pod"  #"/cam0/image_raw"
#image1_topic: "/cam1/image_raw"
output_path: "~/output/"

cam0_calib: "cam0_mei.yaml"
#cam1_calib: "cam1_mei.yaml"
image_width: 752
image_height: 480

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 1   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.

body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 0.0148655429818, -0.999880929698,    0.00414029679422, -0.0216401454975,
           0.999557249008,   0.0149672133247,   0.025715529948,   -0.064676986768,
          -0.0257744366974,  0.00375618835797,  0.999660727178,    0.00981073058949,
           0,                0,                 0,                 1]

#Multiple thread support
multiple_thread: 1

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features
freq: 10                # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0        # RANSAC threshold (pixel)
show_track: 1           # publish tracking image as topic
flow_back: 1            # perform forward and backward optical flow to improve feature tracking accuracy

#optimization parameters
max_solver_time: 0.04   # max solver iteration time (ms), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters          The more accurate parameters you provide, the better the performance
acc_n: 0.22    #0.1      # accelerometer measurement noise standard deviation.
gyr_n: 0.031   #0.01     # gyroscope measurement noise standard deviation.
acc_w: 0.0064  #0.001    # accelerometer bias random walk noise standard deviation.
gyr_w: 0.00054 #0.0001   # gyroscope bias random walk noise standard deviation.
g_norm:        #9.81007  # gravity magnitude

#unsynchronization parameters
estimate_td: 0           # online estimate time offset between camera and imu
td: 0.0                  # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#loop closure parameters
load_previous_pose_graph: 0                   # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "~/output/pose_graph/"  # save and load path
save_image: 1                                 # save image in pose graph for visualization purposes; you can disable this by setting 0
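
The "#Multiple thread support" block ties back to the Java example at the top of the post: when multiple_thread is 1, the estimator's measurement processing is run on its own thread instead of inline. The snippet below is a minimal, hypothetical Java sketch of that pattern (EstimatorLauncher, processMeasurements and the hard-coded flag are illustrative only; the actual consumer of this config is the VINS-Fusion C++ code).

public class EstimatorLauncher {

    // Flag value copied from the YAML above; hard-coded here for illustration.
    private static final int MULTIPLE_THREAD = 1;

    public static void main(String[] args) throws InterruptedException {
        Runnable processMeasurements = () ->
                System.out.println("processing measurements on " + Thread.currentThread().getName());

        if (MULTIPLE_THREAD == 1) {
            // Multi-threaded mode: the back-end work runs on a dedicated thread.
            Thread backend = new Thread(processMeasurements, "backend");
            backend.start();
            backend.join();
        } else {
            // Single-threaded mode: run the same work inline on the calling thread.
            processMeasurements.run();
        }
    }
}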