SpringBoot + Druid + Atomikos Distributed Transaction Management

This article walks through implementing distributed transaction management in a SpringBoot project with Druid and Atomikos: adding the Maven dependency, writing the data source configuration, building the DruidConfig configuration class, configuring the two data sources, and finally using the @Transactional annotation to keep business operations consistent.


WHAT: Baidu Baike gives a formal definition of distributed transactions; in practical terms it comes down to this: in day-to-day development a single business operation sometimes has to touch more than one data source, and to keep the data consistent the whole series of writes must be atomic, so that either every write succeeds or every write is rolled back. Managing the atomicity of I/O operations across multiple data sources is what we call distributed transaction management.

WHY: Whenever a single business operation performs several writes, we rely on Spring's transaction management to keep the data consistent. By the same token, when the writes in one operation target several different data sources, managing the distributed transaction becomes something the developer must think about as well.

HOW: There are several ways to implement distributed transactions; this article covers only the SpringBoot + Druid + Atomikos approach. The steps are as follows:

1: Add the Atomikos Maven dependency

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jta-atomikos</artifactId>
</dependency>
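
The rest of this article assumes Druid and the MySQL driver are also on the classpath; if your build does not already manage them, the dependencies look roughly like this (the version properties are placeholders you should pin yourself):

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <!-- placeholder: pin the Druid version your project standardizes on -->
    <version>${druid.version}</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <!-- placeholder: a 5.x connector matches the com.mysql.jdbc.Driver class used below -->
    <version>${mysql.version}</version>
</dependency>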

2: Data source configuration (application.yml)

spring:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    finance:
       driverClassName: com.mysql.jdbc.Driver
       url: jdbc:mysql://XXX:3306/XXX?useUnicode=true&characterEncoding=utf8
       username: ADMIN
       password: 123456
       initialSize: 1
       minIdle: 3
       maxActive: 20
       # Maximum wait time when acquiring a connection from the pool, in milliseconds
       maxWait: 60000
       # How often the eviction thread checks for idle connections to close, in milliseconds
       timeBetweenEvictionRunsMillis: 60000
       # Minimum time a connection must sit idle in the pool before it can be evicted, in milliseconds
       minEvictableIdleTimeMillis: 30000
       validationQuery: select 'x'
       validationQueryTimeout: 3
       testWhileIdle: true
       testOnBorrow: false
       testOnReturn: false
       # Enable the PreparedStatement cache (PSCache) and set its size per connection
       poolPreparedStatements: true
       maxPoolPreparedStatementPerConnectionSize: 20
       # Monitoring filters; removing 'stat' breaks SQL statistics in the console, 'wall' is the SQL firewall
       filters: stat,wall,slf4j
       # connectionProperties enables SQL merging for statistics and slow-SQL logging
       connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
    cloud_main:
       driverClassName: com.mysql.jdbc.Driver
       url: jdbc:mysql://XXX:3306/YYY?useUnicode=true&characterEncoding=utf8
       username: ADMIN
       password: 123456
       initialSize: 1
       minIdle: 3
       maxActive: 20
       # Maximum wait time when acquiring a connection from the pool, in milliseconds
       maxWait: 60000
       # How often the eviction thread checks for idle connections to close, in milliseconds
       timeBetweenEvictionRunsMillis: 60000
       # Minimum time a connection must sit idle in the pool before it can be evicted, in milliseconds
       minEvictableIdleTimeMillis: 30000
       validationQuery: select 'x'
       validationQueryTimeout: 3
       testWhileIdle: true
       testOnBorrow: false
       testOnReturn: false
       # Enable the PreparedStatement cache (PSCache) and set its size per connection
       poolPreparedStatements: true
       maxPoolPreparedStatementPerConnectionSize: 20
       # Monitoring filters; removing 'stat' breaks SQL statistics in the console, 'wall' is the SQL firewall
       filters: stat,wall,slf4j
       # connectionProperties enables SQL merging for statistics and slow-SQL logging
       connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
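
One caveat: the two data sources are built by hand in the next step rather than through Spring Boot's single-DataSource auto-configuration, so depending on your Spring Boot version you may need to exclude that auto-configuration on the startup class. A minimal sketch, assuming an otherwise standard application class:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// Exclude the default single-DataSource auto-configuration; the two XA data
// sources are registered manually in DruidConfig below.
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}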

3: Create a DruidConfig.java with the following content:

// Key imports (packages shown for Spring Boot 1.4+/2.x; adjust to your versions)
import com.alibaba.druid.filter.stat.StatFilter;
import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;
import com.alibaba.druid.wall.WallConfig;
import com.alibaba.druid.wall.WallFilter;
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import com.atomikos.jdbc.AtomikosDataSourceBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.env.Environment;
import org.springframework.transaction.jta.JtaTransactionManager;

import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import java.util.Properties;

@Configuration
public class DruidConfig {

    @Bean(name = "coreDataSource")
    @Primary
    @Autowired
    public DataSource systemDataSource(Environment env) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        Properties prop = build(env, "spring.datasource.cloud_main.");
        ds.setXaDataSourceClassName("com.alibaba.druid.pool.xa.DruidXADataSource");
        ds.setUniqueResourceName("coreDB");
        ds.setPoolSize(5);
        ds.setXaProperties(prop);
        return ds;
    }

    @Autowired
    @Bean(name = "financeDataSource")
    public AtomikosDataSourceBean businessDataSource(Environment env) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        Properties prop = build(env, "spring.datasource.finance.");
        ds.setXaDataSourceClassName("com.alibaba.druid.pool.xa.DruidXADataSource");
        ds.setUniqueResourceName("financeDB");
        ds.setPoolSize(5);
        ds.setXaProperties(prop);
        return ds;
    }

    /**
     * Register the JTA transaction manager backed by Atomikos.
     * @return the Spring JtaTransactionManager used for distributed transactions
     */
    @Bean(name = "xaTransactionManager")
    public JtaTransactionManager regTransactionManager () {
        UserTransactionManager userTransactionManager = new UserTransactionManager();
        UserTransaction userTransaction = new UserTransactionImp();
        return new JtaTransactionManager(userTransaction, userTransactionManager);
    }

    private Properties build(Environment env, String prefix) {
        Properties prop = new Properties();
        prop.put("url", env.getProperty(prefix + "url"));
        prop.put("username", env.getProperty(prefix + "username"));
        prop.put("password", env.getProperty(prefix + "password"));
        prop.put("driverClassName", env.getProperty(prefix + "driverClassName", ""));
        prop.put("initialSize", env.getProperty(prefix + "initialSize", Integer.class));
        prop.put("maxActive", env.getProperty(prefix + "maxActive", Integer.class));
        prop.put("minIdle", env.getProperty(prefix + "minIdle", Integer.class));
        prop.put("maxWait", env.getProperty(prefix + "maxWait", Integer.class));
        prop.put("poolPreparedStatements", env.getProperty(prefix + "poolPreparedStatements", Boolean.class));
        prop.put("maxPoolPreparedStatementPerConnectionSize",
                env.getProperty(prefix + "maxPoolPreparedStatementPerConnectionSize", Integer.class));
        prop.put("validationQuery", env.getProperty(prefix + "validationQuery"));
        prop.put("validationQueryTimeout", env.getProperty(prefix + "validationQueryTimeout", Integer.class));
        prop.put("testOnBorrow", env.getProperty(prefix + "testOnBorrow", Boolean.class));
        prop.put("testOnReturn", env.getProperty(prefix + "testOnReturn", Boolean.class));
        prop.put("testWhileIdle", env.getProperty(prefix + "testWhileIdle", Boolean.class));
        prop.put("timeBetweenEvictionRunsMillis",
                env.getProperty(prefix + "timeBetweenEvictionRunsMillis", Integer.class));
        prop.put("minEvictableIdleTimeMillis", env.getProperty(prefix + "minEvictableIdleTimeMillis", Integer.class));
        prop.put("filters", env.getProperty(prefix + "filters"));
        return prop;
    }

    @Bean
    public ServletRegistrationBean druidServlet() {
        ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(new StatViewServlet(), "/druid/*");
        // To require a login for the Druid monitoring console, enable the next two lines
        //servletRegistrationBean.addInitParameter("loginUsername", "admin");
        //servletRegistrationBean.addInitParameter("loginPassword", "admin");
        return servletRegistrationBean;
    }

    @Bean
    public FilterRegistrationBean filterRegistrationBean() {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        filterRegistrationBean.setFilter(new WebStatFilter());
        filterRegistrationBean.addUrlPatterns("/*");
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        filterRegistrationBean.addInitParameter("profileEnable", "true");
        return filterRegistrationBean;
    }

    @Bean
    public StatFilter statFilter(){
        StatFilter statFilter = new StatFilter();
        statFilter.setLogSlowSql(true); // log SQL whose execution time exceeds slowSqlMillis
        statFilter.setMergeSql(true); // merge statements that differ only in parameters in the statistics
        statFilter.setSlowSqlMillis(1000); // the default slowSqlMillis is 3000, i.e. 3 seconds
        return statFilter;
    }

    @Bean
    public WallFilter wallFilter(){
        WallFilter wallFilter = new WallFilter();
        // allow multiple statements in one execution (needed for some batch operations)
        WallConfig config = new WallConfig();
        config.setMultiStatementAllow(true);
        wallFilter.setConfig(config);
        return wallFilter;
    }
}

4: Create one configuration class per data source, named CoreDataSourceConfig.java and FinanceDataSourceConfig.java

CoreDataSourceConfig.java:

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

import javax.sql.DataSource;

@Configuration
// Scan the mapper interfaces of this data source and register them with the container
@MapperScan(basePackages = CoreDataSourceConfig.PACKAGE, sqlSessionFactoryRef = "coreSqlSessionFactory")
public class CoreDataSourceConfig {

    @Autowired
    @Qualifier("coreDataSource")
    private DataSource ds;
    // Scoped to the core DAO package so it stays isolated from the other data source
    static final String PACKAGE = "com.parkplus.cloud.park.core.dao";
    static final String MAPPER_LOCATION = "classpath:mybatis/cloud_main/*.xml";
  
    @Bean
    public SqlSessionFactory coreSqlSessionFactory()
            throws Exception {
        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(ds);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
    @Bean
    public SqlSessionTemplate sqlSessionTemplate() throws Exception {
        SqlSessionTemplate template = new SqlSessionTemplate(coreSqlSessionFactory()); // uses the factory configured above
        return template;
    }
}

FinanceDataSourceConfig.java:

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

import javax.sql.DataSource;

@Configuration
@MapperScan(basePackages = FinanceDataSourceConfig.PACKAGE, sqlSessionFactoryRef = "financeSqlSessionFactory")
public class FinanceDataSourceConfig {

    @Autowired
    @Qualifier("financeDataSource")
    private DataSource ds;
    // Scoped to the finance DAO package so it stays isolated from the other data source
    static final String PACKAGE = "com.parkplus.finance.dao";
    static final String MAPPER_LOCATION = "classpath:mybatis/finance/*.xml";

    @Bean
    public SqlSessionFactory financeSqlSessionFactory()
            throws Exception {
        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(ds);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(MAPPER_LOCATION));
        // If you use a pagination plugin, inject it and register it here, e.g. sessionFactory.setPlugins(new Interceptor[]{paginationInterceptor});
        return sessionFactory.getObject();
    }

    @Bean
    public SqlSessionTemplate sqlSessionTemplate2() throws Exception {
        SqlSessionTemplate template = new SqlSessionTemplate(financeSqlSessionFactory()); // uses the factory configured above
        return template;
    }
}
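
To make the wiring concrete, here is a minimal sketch of a mapper interface that would live under the scanned core package; the interface name, method, and XML file are hypothetical, and the finance side would follow the same pattern under com.parkplus.finance.dao:

package com.parkplus.cloud.park.core.dao;

import org.apache.ibatis.annotations.Param;

// Picked up by @MapperScan in CoreDataSourceConfig and executed through
// coreSqlSessionFactory; the SQL would live in classpath:mybatis/cloud_main/CoreOrderMapper.xml.
public interface CoreOrderMapper {
    int insertOrder(@Param("orderNo") String orderNo, @Param("amount") long amountInCents);
}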

That completes the configuration. Annotate any method that needs transaction control with @Transactional and it will run as a distributed transaction; the test results are not repeated here.
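
As an illustration, a minimal sketch of a service that writes to both data sources in one distributed transaction; the mapper beans are the hypothetical ones sketched above, and the transactionManager attribute is only needed when more than one transaction manager bean exists in the context:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderBillingService {

    @Autowired
    private CoreOrderMapper coreOrderMapper;     // bound to coreDataSource via CoreDataSourceConfig
    @Autowired
    private FinanceBillMapper financeBillMapper; // hypothetical finance-side mapper, bound to financeDataSource

    // Both inserts join the same JTA/XA transaction: if either one throws,
    // Atomikos rolls back the work on both databases.
    @Transactional(transactionManager = "xaTransactionManager", rollbackFor = Exception.class)
    public void createOrderWithBill(String orderNo, long amountInCents) {
        coreOrderMapper.insertOrder(orderNo, amountInCents);
        financeBillMapper.insertBill(orderNo, amountInCents);
    }
}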

If anything is still unclear, feel free to email me at 971399161@qq.com and we can work through it together; I am always happy to trade notes with fellow developers. Finally, thank you for reading.