Directly using a thread
new Thread(() -> {
    while (true) {
        try {
            Thread.sleep(3000); // pause 3 seconds between runs
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Executing task " + LocalDateTime.now());
    }
}).start();
System.in.read(); // block the main thread so the JVM keeps running
The code above executes the task every 3 seconds.
Timer
Timer timer = new Timer();
timer.schedule(new TimerTask() {
    @Override
    public void run() {
        System.out.println("Executing task " + LocalDateTime.now());
    }
}, 5000, 3000); // initial delay 5000 ms, then every 3000 ms
The code above starts executing after 5 seconds, then runs every 3 seconds.
ScheduledThreadPoolExecutor
ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(10);
executor.scheduleWithFixedDelay(new Runnable() {
    @Override
    public void run() {
        System.out.println("Executing task " + LocalDateTime.now());
    }
}, 5, 3, TimeUnit.SECONDS);
The code above starts executing after 5 seconds, then runs every 3 seconds.
Note: scheduleWithFixedDelay means "execute with a fixed delay": the delay is measured from the end of one execution to the start of the next.
scheduleAtFixedRate means "execute at a fixed rate": the period is measured between the start times of two successive executions.
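The difference only shows when the task itself takes time. A minimal sketch (class and variable names are illustrative; the 2-second sleep simulates work):
import java.time.LocalDateTime;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DelayVsRateDemo {
    public static void main(String[] args) {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(2);
        Runnable slowTask = () -> {
            System.out.println("start " + LocalDateTime.now());
            try {
                Thread.sleep(2000); // the task itself takes ~2 seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        // fixed rate: starts roughly every 3 seconds, work time included
        executor.scheduleAtFixedRate(slowTask, 0, 3, TimeUnit.SECONDS);
        // fixed delay: 3 seconds AFTER each run ends, i.e. starts ~every 5 seconds
        // executor.scheduleWithFixedDelay(slowTask, 0, 3, TimeUnit.SECONDS);
    }
}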
Quartz
public class HelloJob implements Job {
    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        System.out.println("Executing task " + LocalDateTime.now());
    }
}
public static void main(String[] args) throws IOException, SchedulerException {
    Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
    Trigger trigger = TriggerBuilder.newTrigger()
            .withSchedule(SimpleScheduleBuilder.simpleSchedule().withIntervalInSeconds(3)
                    .repeatForever()).build();
    JobDetail job = JobBuilder.newJob(HelloJob.class).build();
    scheduler.scheduleJob(job, trigger);
    scheduler.start();
}
Spring Task
@SpringBootApplication
@EnableScheduling
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
@Component
public class MySpringTask {
    @Scheduled(cron = "0/5 * * * * ?")
    public void test() {
        System.out.println("Executing Spring Task");
    }
}
The approaches above share several drawbacks:
1. Single-threaded execution: if the previous task runs long, the next task is starved and blocked (see the sketch after this list)
2. No distributed coordination: a single node is a single point of failure, while deploying multiple nodes causes concurrent duplicate execution
3. As the number of tasks grows, there is no unified view for tracking and managing task progress
4. The features are basic: no timeout, retry, and similar capabilities
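Drawback 1 is easy to reproduce with Timer, which runs all tasks on a single thread. A hypothetical demo (names and the 10-second sleep are illustrative):
import java.time.LocalDateTime;
import java.util.Timer;
import java.util.TimerTask;

public class TimerStarvationDemo {
    public static void main(String[] args) {
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                System.out.println("slow task starts " + LocalDateTime.now());
                try {
                    Thread.sleep(10000); // simulate a long-running task
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, 0);
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                // scheduled for 1 second, but only runs once the slow task finishes
                System.out.println("fast task runs " + LocalDateTime.now());
            }
        }, 1000);
    }
}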
ZooKeeper
After downloading ZooKeeper:
1. Rename the zoo_sample.cfg file in the conf directory to zoo.cfg
2. Open a cmd window in the bin directory and run the zkServer.cmd script to start ZooKeeper
ElasticJob Simple Job
Once ZooKeeper is running, ElasticJob can be used.
Create a new Maven project and add the dependency:
<dependency>
    <groupId>org.apache.shardingsphere.elasticjob</groupId>
    <artifactId>elasticjob-lite-core</artifactId>
    <version>3.0.1</version>
</dependency>
Create a class that implements SimpleJob:
public class MySimpleJob implements SimpleJob {
    @Override
    public void execute(ShardingContext context) {
        System.out.println("Executing task " + LocalDateTime.now());
    }
}
Create a bootstrap class:
public class TestMySimpleJob {
    public static void main(String[] args) {
        new ScheduleJobBootstrap(createRegistryCenter(), new MySimpleJob(), createJobConfiguration()).schedule();
    }
    // Connect to ZooKeeper
    private static CoordinatorRegistryCenter createRegistryCenter() {
        CoordinatorRegistryCenter regCenter = new ZookeeperRegistryCenter(new ZookeeperConfiguration("localhost:2181", "my-job"));
        regCenter.init();
        return regCenter;
    }
    // Create the job configuration: "MySimpleJob" is the job name, 1 is the total shard count
    private static JobConfiguration createJobConfiguration() {
        return JobConfiguration.newBuilder("MySimpleJob", 1)
                .cron("0/3 * * * * ?")
                .build();
    }
}
The code above executes the MySimpleJob task every 3 seconds.
Running the code requires an SLF4J logging binding; add it directly:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.36</version>
</dependency>
DataflowJob (Dataflow Job)
Create a class that implements the DataflowJob interface:
public class MyDataflowJob implements DataflowJob<String> {
    @Override
    public List<String> fetchData(ShardingContext shardingContext) {
        List<String> data = new ArrayList<>();
        data.add("Data1");
        data.add("Data2");
        data.add("Data3");
        data.add("Data4");
        return data;
    }
    @Override
    public void processData(ShardingContext shardingContext, List<String> list) {
        System.out.println(LocalDateTime.now() + " processing data: " + list);
    }
}
Create a bootstrap class to test it:
public class TestMyDataflowJob {
    public static void main(String[] args) {
        new ScheduleJobBootstrap(createRegistryCenter(), new MyDataflowJob(), createJobConfiguration()).schedule();
    }
    // Connect to ZooKeeper
    private static CoordinatorRegistryCenter createRegistryCenter() {
        CoordinatorRegistryCenter regCenter = new ZookeeperRegistryCenter(new ZookeeperConfiguration("localhost:2181", "my-job"));
        regCenter.init();
        return regCenter;
    }
    // Create the job configuration
    private static JobConfiguration createJobConfiguration() {
        return JobConfiguration.newBuilder("MyDataflowJob", 1)
                .cron("0/3 * * * * ?")
                .build();
    }
}
The code above runs MyDataflowJob every 3 seconds. Each execution first calls fetchData() to fetch data; if data is returned (the return value is not null and the collection is not empty), processData() is called to process it.
If the job is configured like this:
private static JobConfiguration createJobConfiguration() {
    return JobConfiguration.newBuilder("MyDataflowJob", 1)
            .cron("0/3 * * * * ?")
            .setProperty(DataflowJobProperties.STREAM_PROCESS_KEY, "true")
            .overwrite(true)
            .build();
}
1. stream.process=true enables streaming processing; the default is false
2. overwrite=true overwrites the job configuration stored in the registry center; without it, newly modified configuration does not take effect
After this change, the code runs continuously rather than once every 3 seconds, because with streaming enabled the job only stops when fetchData() returns null or an empty collection; as long as data keeps coming back, it keeps looping fetch-then-process within the same trigger.
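A minimal sketch of a streaming job that actually terminates: fetchData() drains an in-memory queue one item at a time, so each cron fire loops until the queue is empty (the class name and queue contents are illustrative; import paths as of elasticjob-lite-core 3.0.1):
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.apache.shardingsphere.elasticjob.api.ShardingContext;
import org.apache.shardingsphere.elasticjob.dataflow.job.DataflowJob;

public class MyStreamingDataflowJob implements DataflowJob<String> {
    private final ConcurrentLinkedQueue<String> queue =
            new ConcurrentLinkedQueue<>(Arrays.asList("Data1", "Data2", "Data3", "Data4"));

    @Override
    public List<String> fetchData(ShardingContext shardingContext) {
        List<String> batch = new ArrayList<>();
        String item = queue.poll();
        if (item != null) {
            batch.add(item); // one item per fetch; an empty list ends the streaming loop
        }
        return batch;
    }

    @Override
    public void processData(ShardingContext shardingContext, List<String> data) {
        System.out.println(LocalDateTime.now() + " processing: " + data);
    }
}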
Script Job
Execute a script on a schedule:
public class TestScriptJob {
    public static void main(String[] args) {
        new ScheduleJobBootstrap(createRegistryCenter(), "SCRIPT", createJobConfiguration()).schedule();
    }
    private static CoordinatorRegistryCenter createRegistryCenter() {
        CoordinatorRegistryCenter regCenter = new ZookeeperRegistryCenter(new ZookeeperConfiguration("localhost:2181", "my-job"));
        regCenter.init();
        return regCenter;
    }
    // Create the job configuration
    private static JobConfiguration createJobConfiguration() {
        return JobConfiguration.newBuilder("MyScriptJob", 1)
                .cron("0/5 * * * * ?")
                .setProperty(ScriptJobProperties.SCRIPT_KEY, "java -version")
                .overwrite(true)
                .build();
    }
}
Note that the second argument to ScheduleJobBootstrap is "SCRIPT", and the command to execute is configured via script.command.line. Under the hood the command is executed with CommandLine, so any command that can run on the machine can be configured and executed here.
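That includes script files, not just inline commands. A sketch (the path is a hypothetical example):
// Job configuration pointing at a script file instead of an inline command
private static JobConfiguration createJobConfiguration() {
    return JobConfiguration.newBuilder("MyScriptJob", 1)
            .cron("0/5 * * * * ?")
            .setProperty(ScriptJobProperties.SCRIPT_KEY, "C:\\jobs\\backup.bat") // hypothetical path
            .overwrite(true)
            .build();
}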
HTTP Job (available since 3.0.0-beta)
public class TestHttpJob {
    public static void main(String[] args) {
        new ScheduleJobBootstrap(createRegistryCenter(), "HTTP", createJobConfiguration()).schedule();
    }
    private static CoordinatorRegistryCenter createRegistryCenter() {
        CoordinatorRegistryCenter regCenter = new ZookeeperRegistryCenter(new ZookeeperConfiguration("localhost:2181", "my-job"));
        regCenter.init();
        return regCenter;
    }
    // Create the job configuration
    private static JobConfiguration createJobConfiguration() {
        return JobConfiguration.newBuilder("MyHttpJob", 1)
                .cron("0/5 * * * * ?")
                .setProperty(HttpJobProperties.URI_KEY, "http://www.baidu.com")
                .setProperty(HttpJobProperties.METHOD_KEY, "GET")
                .setProperty(HttpJobProperties.DATA_KEY, "source=ejob") // request body
                .overwrite(true)
                .build();
    }
}
Note that the second argument to ScheduleJobBootstrap is "HTTP"; the request is configured via properties such as http.uri and http.method. Under the hood it is implemented with HttpURLConnection.
To see the call result, set the log level to DEBUG, because the HttpJobExecutor source prints the result like this:
if (this.isRequestSucceed(code)) {
    log.debug("HTTP job execute result : {}", result.toString());
} else {
    log.warn("HTTP job {} executed with response body {}", jobConfig.getJobName(), result.toString());
}
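With the slf4j-log4j12 binding added earlier, one way to do that is a log4j.properties on the classpath. A minimal sketch (raising the whole org.apache.shardingsphere.elasticjob namespace to DEBUG is a blunt but safe assumption):
log4j.rootLogger=INFO, stdout
log4j.logger.org.apache.shardingsphere.elasticjob=DEBUG
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] %m%n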
One-off Scheduling
We can use ScheduleJobBootstrap for recurring scheduling, controlled by the specified cron expression, for example:
ScheduleJobBootstrap scheduleJobBootstrap = new ScheduleJobBootstrap(createRegistryCenter(), new MySimpleJob(), createJobConfiguration());
scheduleJobBootstrap.schedule();
Alternatively, use OneOffJobBootstrap for one-off scheduling:
OneOffJobBootstrap jobBootstrap = new OneOffJobBootstrap(createRegistryCenter(), new MySimpleJob(), createJobConfiguration());
// one-off scheduling can be triggered multiple times
jobBootstrap.execute();
jobBootstrap.execute();
jobBootstrap.execute();
Note: when using OneOffJobBootstrap, the JobConfiguration must not specify a cron expression.
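So for the one-off variant, the configuration is the same builder minus .cron(...); a minimal sketch:
// Job configuration for OneOffJobBootstrap: no cron expression
private static JobConfiguration createOneOffJobConfiguration() {
    return JobConfiguration.newBuilder("MySimpleJob", 1)
            .overwrite(true)
            .build();
}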
Error Handling Strategies
If an error occurs while a task is executing, the available strategies are:
1. Log: record a log entry without interrupting job execution; this is the default strategy
2. Throw: throw a system exception and interrupt job execution
3. Ignore: ignore the system exception without interrupting job execution
4. Email notification strategy
5. WeCom (Enterprise WeChat) notification strategy
6. DingTalk notification strategy
Configuring an Error Handling Strategy
private static JobConfiguration createJobConfiguration() {
    return JobConfiguration.newBuilder("MySimpleJob", 1)
            .cron("0/5 * * * * ?")
            // .jobErrorHandlerType("LOG")    // log the error
            // .jobErrorHandlerType("THROW")  // throw an exception
            // .jobErrorHandlerType("IGNORE") // ignore the exception
            .overwrite(true)
            .build();
}
WeCom Notification
First add a bot: in the mobile client, pick a group chat, enable the bot, and then copy its WEBHOOK URL.
private static JobConfiguration createJobConfiguration() {
    return JobConfiguration.newBuilder("MySimpleJob", 1)
            .cron("0/5 * * * * ?")
            .jobErrorHandlerType("WECHAT")
            .setProperty(WechatPropertiesConstants.WEBHOOK, "your webhook URL")
            .overwrite(true)
            .build();
}
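The DingTalk strategy is configured the same way. A sketch assuming the DingtalkPropertiesConstants keys from the DingTalk error-handler module (verify the constant names against your ElasticJob version):
private static JobConfiguration createJobConfiguration() {
    return JobConfiguration.newBuilder("MySimpleJob", 1)
            .cron("0/5 * * * * ?")
            .jobErrorHandlerType("DINGTALK")
            // both keys below are assumptions based on the DingTalk error-handler module
            .setProperty(DingtalkPropertiesConstants.WEBHOOK, "your DingTalk webhook URL")
            .setProperty(DingtalkPropertiesConstants.KEYWORD, "custom keyword")
            .overwrite(true)
            .build();
}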