Spring Cloud Alibaba
Environment Setup
Project structure setup
- Copy and modify an existing project
- Scaffolding
Alibaba Cloud native scaffold: https://start.aliyun.com/
Building a project starter
Version selection
Version guide: https://github.com/alibaba/spring-cloud-alibaba/wiki/%E7%89%88%E6%9C%AC%E8%AF%B4%E6%98%8E
The three major Spring Cloud Alibaba branches:
2022.x branch: targets Spring Boot 3.0 and Spring Cloud 2022.x and above
2021.x branch: targets Spring Boot 2.4 and Spring Cloud 2021.x and above
2.2.x branch: targets Spring Boot 2.4 and below, Spring Cloud Hoxton and below
<spring-boot.version>2.3.12.RELEASE</spring-boot.version>
<spring-cloud-alibaba.version>2.2.9.RELEASE</spring-cloud-alibaba.version>
Component version matrix
Spring Cloud Alibaba Version: 2.2.9.RELEASE
Sentinel Version: 1.8.5
Nacos Version: 2.1.0
RocketMQ Version: 4.9.4
Dubbo Version: ~ (Spring Cloud Dubbo was removed from the main branch as of 2021.0.1.0 and no longer evolves with it)
Seata Version: 1.5.2
Nacos installation and deployment
Download: https://github.com/alibaba/nacos/releases/tag/2.1.0
Docs: https://nacos.io/zh-cn/docs/v2/quickstart/quick-start.html
1. Start and deploy Nacos
Copy the archive: sudo cp /Users/mac/soft/spring-cloud-alibaba/nacos-server-2.1.0.tar.gz /usr/local
Extract: sudo tar -zxvf nacos-server-2.1.0.tar.gz
Remove the archive: sudo rm -rf nacos-server-2.1.0.tar.gz
Start Nacos:
cd nacos/bin
sudo sh startup.sh -m standalone
Open the console:
http://127.0.0.1:8848/nacos
Default credentials: nacos/nacos
Stop Nacos:
sudo sh shutdown.sh
If nacos-server is not running, dependent services will fail to start.
2. pom dependencies
1) Root pom: the Spring Cloud Alibaba BOM pins the versions of all its dependencies, delegating version management to the BOM
<properties>
    <spring.cloud.version>Hoxton.SR12</spring.cloud.version>
    <spring-cloud-alibaba.version>2.2.9.RELEASE</spring-cloud-alibaba.version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring.cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-dependencies</artifactId>
            <version>${spring-cloud-alibaba.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
2) In the service module's pom, for service registration/discovery:
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
3. YAML config
1) Add the configuration:
spring:
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
2) Annotate the startup class with @EnableDiscoveryClient to enable service registration and discovery
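A minimal sketch of the startup class (class and application names are assumed, not from the project):

```java
@SpringBootApplication
@EnableDiscoveryClient // register this service with Nacos and enable service discovery
public class OrderApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}
```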
Load balancing
Running multiple instances of one service in IDEA 2023:
IDEA 2023 no longer shows "Allow parallel run"; open Edit Configurations, click Modify options, tick "Allow multiple instances" in the popup, then Apply.
Load balancing with Ribbon
Add to the startup class:
@Bean
@LoadBalanced
public RestTemplate getRestTemplate() {
    return new RestTemplate();
}
Inject and use it in the service:
@Resource
private RestTemplate restTemplate;
return restTemplate.getForObject("http://" + "pay-server" + "/hello/create", String.class);
Ribbon load-balancing strategies
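The strategy can be switched per target service in the yml. A sketch assuming the pay-server service from above (RandomRule instead of the default ZoneAvoidanceRule):

```yaml
pay-server:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RandomRule # pick instances at random
```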
Service calls with Feign
A declarative pseudo-HTTP client that makes calling a remote service as simple as calling a local method: just create an interface and add an annotation.
Nacos integrates well with Feign, and Feign ships with Ribbon built in, so using Feign under Nacos gives load balancing by default.
<!-- Feign component -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
// Enable Feign on the startup class
@EnableFeignClients
Create an interface; @FeignClient + @GetMapping together form the complete request path.
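A minimal sketch of such an interface, reusing the pay-server service name and /hello/create path from the RestTemplate example above (interface and method names are made up):

```java
// Hypothetical Feign client; "pay-server" is the service name registered in Nacos
@FeignClient("pay-server")
public interface PayFeignClient {

    // Combined with the client name, this maps to http://pay-server/hello/create
    @GetMapping("/hello/create")
    String create();
}
```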
If a RestTemplate response is of type String, the response is handed to StringHttpMessageConverter for conversion.
StringHttpMessageConverter's default charset is ISO-8859-1, which garbles non-Latin text.
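The effect of that wrong default charset can be reproduced with plain JDK calls. A minimal, hypothetical demo (class and method names are made up, not part of the project):

```java
import java.nio.charset.StandardCharsets;

// Demonstrates why decoding UTF-8 bytes as ISO-8859-1 garbles Chinese text,
// which is exactly what StringHttpMessageConverter's default charset causes.
public class CharsetDemo {

    // Decode UTF-8 bytes with the wrong charset, as the converter would.
    public static String garble(String s) {
        return new String(s.getBytes(StandardCharsets.UTF_8), StandardCharsets.ISO_8859_1);
    }

    // Reverse the damage: re-encode as ISO-8859-1, then decode as UTF-8.
    public static String repair(String s) {
        return new String(s.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String garbled = garble("你好");
        System.out.println(garbled);         // mojibake
        System.out.println(repair(garbled)); // 你好
    }
}
```

The usual fix is to register a StringHttpMessageConverter configured with UTF-8 (or set the charset explicitly in the response Content-Type).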
Spring Boot's default serialization library is jackson-databind.
@ResponseBody automatically serializes the object returned by the server into a JSON string;
when passing a JSON body, annotating the parameter object with @RequestBody automatically deserializes the incoming JSON string into a Java object.
Sentinel installation and deployment
Simulating a high-concurrency scenario:
1. Sleep 100 ms in the handler
2. Lower Tomcat's maximum concurrency to 10 (default is 200):
server:
  tomcat:
    threads:
      max: 10
3. Load-test with Apipost
Because method a piles up requests, access to method b also starts failing: the embryo of a service avalanche.
Avalanches have many causes, e.g. a method slowing down under high concurrency, or server resources being exhausted. With good fault tolerance in place, snow falls but no avalanche starts.
Common fault-tolerance approaches
Isolation: thread-pool isolation, semaphore isolation
Timeout: cut the request off after a deadline and release the thread
Rate limiting
Circuit breaking: closed, open, and half-open states
Degradation: provide a fallback for the service
Common fault-tolerance components
Hystrix: a latency and fault-tolerance library open-sourced by Netflix; it isolates access to remote systems, services, and third-party libraries to prevent cascading failures, improving availability and resilience.
Resilience4J: a very lightweight, simple tool with clear, rich documentation; the replacement officially recommended by Hystrix.
Sentinel: "the traffic sentinel of distributed systems", Alibaba's open-source, all-round fault-tolerance solution. Taking traffic as its entry point, it protects service stability along several dimensions: flow control, circuit breaking and degradation, and system load protection.
Integrating Sentinel into a microservice
1. In the service module's pom:
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>
2. Install the Sentinel dashboard
Download: https://github.com/alibaba/Sentinel/releases
sentinel-dashboard-1.8.5.jar
3. Start the jar directly (the dashboard itself is a Spring Boot project):
cd /Users/mac/dengsiwen/soft/spring-cloud-alibaba
java -Dserver.port=8080 -Dcsp.sentinel.dashboard.server=localhost:8080 -Dproject.name=sentinel-dashboard -jar sentinel-dashboard-1.8.5.jar
4. Open localhost:8080 in a browser (default credentials: sentinel/sentinel)
5. Add to the yml:
spring:
  cloud:
    sentinel:
      transport:
        port: 9999 # port used to talk to the dashboard; any unused port works
        dashboard: localhost:8080 # dashboard address
Troubleshooting
After adding Sentinel, the service fails to start with:
because no Bean Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>6.1.7.Final</version>
</dependency>
Sentinel client cannot connect to the service:
Failed to fetch metric from http://192.168.112.184:8720/metric?startTime=1697103198000&endTime=1697103204000&refetch=false (ConnectionException: Connection refused)
The cause may be a port issue; not yet resolved: https://blog.youkuaiyun.com/qq_45064687/article/details/129915303
No services shown on the left of the Sentinel dashboard: if both the pom and the yaml are correct, the service may simply take a while to appear.
Spring Cloud Alibaba Sentinel: the machine list shows a live connection, but real-time monitoring stays empty after calling an interface.
Cause: the project configures WebMvcConfigurationSupport, which disables the Sentinel interceptor.
https://blog.youkuaiyun.com/qq_36762765/article/details/131020007
gateway
Errors when starting the gateway project
Spring Boot starts successfully and then immediately logs {dataSource-1} closing: a jar reference was missing. But once the gateway jar is added, the dependency below must be removed again:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
Spring Boot startup fails with javax.servlet.Registration$Dynamic
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>servlet-api</artifactId>
</dependency>
Replace with:
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
</dependency>
Gateway dependency (other jars such as Spring Boot and Nacos stay as before):
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
YAML config; enable Nacos with @EnableDiscoveryClient on the startup class
spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true # let the gateway discover the microservices in Nacos
      routes:
        - id: order_route
          uri: lb://order-server # lb = resolve the microservice by name from Nacos, applying the load-balancing strategy
          predicates:
            - Path=/order/** # forward only when the request path matches the Path rule
          filters:
            - StripPrefix=1 # strip one path segment before forwarding
Built-in route predicate factories
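For reference, a few built-in predicates as they might be added under the order_route above (the values are made-up examples):

```yaml
predicates:
  - Path=/order/**
  - Method=GET,POST # only these HTTP methods
  - After=2023-01-01T00:00:00.000+08:00[Asia/Shanghai] # only requests after this instant
  - Header=X-Request-Id, \d+ # header value must match the regex
```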
Custom route predicate factories
Error: reactor.core.Exceptions$ErrorCallbackNotImplemented: org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under '' to com.example.gateway.factory.AgeRoutePredicateFactory$Config
Cause: the nested Config class was missing its public static modifiers.
Requesting http://localhost:9888/order/hello/create?age=10 fails with 404
Requesting http://localhost:9888/order/hello/create?age=30 passes normally
package com.example.gateway.factory;

import lombok.Getter;
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang.StringUtils;
import org.springframework.cloud.gateway.handler.predicate.AbstractRoutePredicateFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;

import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

/**
 * @author dengsiwen
 * @date 2023-10-24 13:16:45
 * @desc
 */
// The generic parameter is a config class that receives the values from the configuration file
@Component
@Slf4j
public class AgeRoutePredicateFactory extends AbstractRoutePredicateFactory<AgeRoutePredicateFactory.Config> {

    public static final String[] KEY_ARRAY = {"minAge", "maxAge"}; // config property names

    public AgeRoutePredicateFactory() {
        super(AgeRoutePredicateFactory.Config.class);
    }

    // Maps shortcut parameters from the configuration file onto the config class fields
    @Override
    public List<String> shortcutFieldOrder() {
        // the order here must match the parameter order in the configuration file
        return Arrays.asList(KEY_ARRAY);
    }

    @Override
    public Predicate<ServerWebExchange> apply(AgeRoutePredicateFactory.Config config) {
        return new Predicate<ServerWebExchange>() {
            @Override
            public boolean test(ServerWebExchange serverWebExchange) {
                // read the age parameter from the incoming request
                String ageStr = serverWebExchange.getRequest().getQueryParams().getFirst("age");
                log.info("ageStr:{}", ageStr);
                if (StringUtils.isNotBlank(ageStr)) {
                    int age = Integer.parseInt(ageStr);
                    return age > config.getMinAge() && age < config.getMaxAge();
                }
                return true;
            }
        };
    }

    // Config class receiving the parameters from the configuration file
    @Getter
    @Setter
    public static class Config {
        private int minAge;
        private int maxAge;
    }
}
Built-in per-route filters
Spring Cloud Gateway ships with many built-in route filter factories. For example:

| Filter factory | Purpose | Parameters |
| --- | --- | --- |
| RewritePath | Rewrites the original request path | a regex for the original path and the rewrite expression |
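A sketch of RewritePath inside a route's filters (the paths are made-up; note the `$\` escape that YAML requires in place of `$`):

```yaml
filters:
  - RewritePath=/api/(?<segment>.*), /$\{segment} # /api/hello/create -> /hello/create
```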
Custom per-route filters
Step 1: add a Log filter entry in the configuration file
Step 2: write a custom filter factory and implement its methods
filters:
  - StripPrefix=1
  - Log=true,false # consoleLog,cacheLog: toggles log output
package com.example.gateway.filter;

import com.alibaba.fastjson.JSON;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.cloud.gateway.filter.GatewayFilter;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.factory.AbstractGatewayFilterFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

import java.util.Arrays;
import java.util.List;

/**
 * @author dengsiwen
 * @date 2023-10-24 17:31:51
 * @desc Custom per-route filter
 */
@Component
@Slf4j
public class LogGatewayFilterFactory extends AbstractGatewayFilterFactory<LogGatewayFilterFactory.Config> {

    // constructor
    public LogGatewayFilterFactory() {
        super(LogGatewayFilterFactory.Config.class);
    }

    // reads parameters from the configuration file into the config class
    @Override
    public List<String> shortcutFieldOrder() {
        return Arrays.asList("consoleLog", "cacheLog");
    }

    // filter logic
    @Override
    public GatewayFilter apply(LogGatewayFilterFactory.Config config) {
        return new GatewayFilter() {
            @Override
            public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
                log.info("filter config:{}", JSON.toJSONString(config));
                if (config.isCacheLog()) {
                    log.info("cacheLog is enabled....");
                }
                if (config.isConsoleLog()) {
                    log.info("consoleLog is enabled....");
                }
                return chain.filter(exchange);
            }
        };
    }

    // config class receiving the configuration parameters
    @Setter
    @Getter
    @NoArgsConstructor
    public static class Config {
        private boolean consoleLog;
        private boolean cacheLog;
    }
}
Built-in global filters
Custom global filters
Authentication flow:
When a client calls a service for the first time, the server authenticates the user (login).
If authentication succeeds, the user info is encrypted into a token and returned to the client as the login credential.
Every subsequent request carries that token.
The server decrypts the token and checks whether it is valid.
package com.example.gateway.filter;

import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang.StringUtils;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

/**
 * @author dengsiwen
 * @date 2023-10-24 17:47:44
 * @desc A custom global filter implements the GlobalFilter and Ordered interfaces
 */
@Component
@Slf4j
public class AuthGlobalFilter implements GlobalFilter, Ordered {

    // the auth check
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String token = exchange.getRequest().getQueryParams().getFirst("token");
        if (StringUtils.isBlank(token)) {
            log.info("authentication failed");
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }
        // call chain.filter to continue downstream
        return chain.filter(exchange);
    }

    // order: the lower the value, the higher the priority
    @Override
    public int getOrder() {
        return 0;
    }
}
Gateway rate limiting
Since version 1.6.0, Sentinel has provided an adapter for Spring Cloud Gateway that supports rate limiting at two resource granularities:
route dimension: the route entries defined in the Spring configuration file; the resource name is the corresponding routeId
custom API dimension: API groups defined via the API that Sentinel provides
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-spring-cloud-gateway-adapter</artifactId>
</dependency>
Write a configuration class
Sentinel's Gateway rate limiting is implemented by the filter it provides; just register a SentinelGatewayFilter instance and a SentinelGatewayBlockExceptionHandler instance.
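A minimal sketch of that configuration class, following the shape of the Sentinel adapter's own example (package and class names assumed; flow-rule registration omitted):

```java
package com.example.gateway.config;

import com.alibaba.csp.sentinel.adapter.gateway.sc.SentinelGatewayFilter;
import com.alibaba.csp.sentinel.adapter.gateway.sc.exception.SentinelGatewayBlockExceptionHandler;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.http.codec.ServerCodecConfigurer;
import org.springframework.web.reactive.result.view.ViewResolver;

import java.util.Collections;
import java.util.List;

@Configuration
public class GatewayConfiguration {

    private final List<ViewResolver> viewResolvers;
    private final ServerCodecConfigurer serverCodecConfigurer;

    public GatewayConfiguration(ObjectProvider<List<ViewResolver>> viewResolversProvider,
                                ServerCodecConfigurer serverCodecConfigurer) {
        this.viewResolvers = viewResolversProvider.getIfAvailable(Collections::emptyList);
        this.serverCodecConfigurer = serverCodecConfigurer;
    }

    // Renders the blocked-request response when a gateway flow rule triggers
    @Bean
    @Order(Ordered.HIGHEST_PRECEDENCE)
    public SentinelGatewayBlockExceptionHandler sentinelGatewayBlockExceptionHandler() {
        return new SentinelGatewayBlockExceptionHandler(viewResolvers, serverCodecConfigurer);
    }

    // The filter that enforces gateway flow rules on each routed request
    @Bean
    @Order(Ordered.HIGHEST_PRECEDENCE)
    public GlobalFilter sentinelGatewayFilter() {
        return new SentinelGatewayFilter();
    }
}
```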
Distributed tracing
1. Set up the ELK stack
Details: https://blog.youkuaiyun.com/yulishi12/article/details/129229510
2. Add the Kafka config to the yml
spring:
  kafka:
    bootstrap-servers: 192.168.111.94:9092
    log-bootstrap-servers: 192.168.111.94:9092 # Kafka connection for logs
    log-topic: api_log # Kafka topic for logs
    producer:
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
3. service-layer pom dependencies
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.5.1</version>
</dependency>
<dependency>
    <groupId>cn.hutool</groupId>
    <artifactId>hutool-all</artifactId>
    <version>5.5.1</version>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.1</version>
</dependency>
<!-- the X-B3-TraceId fields in logback.xml are populated automatically by this jar -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
4. Configuration classes
package com.dsw.order.web.aop;

import lombok.extern.slf4j.Slf4j;
import org.slf4j.MDC;
import org.springframework.core.Ordered;
import org.springframework.http.HttpHeaders;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.HandlerInterceptor;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author dengsiwen
 * @date 2023-10-31 14:51:37
 * @desc
 */
@Component
@Slf4j
public class CommonInterceptor implements HandlerInterceptor, Ordered {

    // Put per-request fields into the MDC so the log encoder can emit them
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        MDC.put("url", request.getRequestURI());
        String token = request.getHeader(HttpHeaders.AUTHORIZATION);
        MDC.put("userId", null); // placeholder: would normally be resolved from the token
        MDC.put("token", token);
        MDC.put("User-Agent", request.getHeader("User-Agent"));
        MDC.put("Host", request.getHeader("Host"));
        MDC.put("ip", request.getHeader("ip"));
        log.info("preHandle url:{}", request.getRequestURI());
        return true;
    }

    // run last among interceptors
    @Override
    public int getOrder() {
        return Integer.MAX_VALUE;
    }
}
Register it in the WebConfig configuration class:
registry.addInterceptor(new CommonInterceptor()).addPathPatterns("/**");
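A sketch of that WebConfig class (name assumed); it implements WebMvcConfigurer rather than extending WebMvcConfigurationSupport, which would disable the Sentinel interceptor as noted earlier:

```java
@Configuration
public class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // apply the MDC interceptor to every request path
        registry.addInterceptor(new CommonInterceptor()).addPathPatterns("/**");
    }
}
```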
KafkaLogAppender configuration class:
package com.dsw.order.web.aop; // must match the appender class referenced in logback.xml

import ch.qos.logback.core.UnsynchronizedAppenderBase;
import ch.qos.logback.core.encoder.Encoder;
import cn.hutool.core.io.IoUtil;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.ByteArrayInputStream;
import java.util.Properties;

/**
 * @author dengsiwen
 * @date 2023-10-31 13:24:01
 * @desc
 */
@Slf4j
public class KafkaLogAppender<E> extends UnsynchronizedAppenderBase<E> {

    protected Encoder<E> encoder;
    private String bootstrapServers;
    private Producer<String, String> kafkaProducer;
    private String logTopic;

    @SneakyThrows
    @Override
    public void start() {
        super.start();
        // "UNDEFINED" means the springProperty was not resolved; skip Kafka in that case
        if (!bootstrapServers.contains("UNDEFINED")) {
            Properties props = new Properties();
            props.put("bootstrap.servers", bootstrapServers);
            props.put("retries", 0);
            props.put("batch.size", 0); // send every record immediately, no batching
            props.put("buffer.memory", 33554432);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            kafkaProducer = new KafkaProducer<>(props);
            kafkaProducer.send(new ProducerRecord<>(logTopic, "kafka log start"));
            log.info("kafka log start...");
        }
    }

    @Override
    protected void append(E e) {
        if (kafkaProducer != null) {
            byte[] bytes = encoder.encode(e);
            ProducerRecord<String, String> producerRecord = new ProducerRecord<>(logTopic, IoUtil.read(new ByteArrayInputStream(bytes), "utf-8"));
            kafkaProducer.send(producerRecord);
            log.info("kafka log append...");
        }
    }

    public Encoder<E> getEncoder() {
        return this.encoder;
    }

    public void setEncoder(Encoder<E> encoder) {
        this.encoder = encoder;
    }

    public String getBootstrapServers() {
        return bootstrapServers;
    }

    public void setBootstrapServers(String bootstrapServers) {
        this.bootstrapServers = bootstrapServers;
    }

    public String getLogTopic() {
        return logTopic;
    }

    public void setLogTopic(String logTopic) {
        this.logTopic = logTopic;
    }
}
5. logback.xml
<springProperty scope="context" name="appName" source="spring.application.name"/>
<springProperty scope="context" name="kafkaLogServer" source="spring.kafka.log-bootstrap-servers"/>
<springProperty scope="context" name="kafkaLogTopic" source="spring.kafka.log-topic"/>
<appender name="kafkaLog" class="com.dsw.order.web.aop.KafkaLogAppender">
    <bootstrapServers>${kafkaLogServer}</bootstrapServers>
    <logTopic>${kafkaLogTopic}</logTopic>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <timeZone>UTC</timeZone>
            </timestamp>
            <pattern>
                <pattern>
                    {
                    "throwable": "%throwable",
                    "timestamp": "%date{\"yyyy-MM-dd'T'HH:mm:ss,SSSZ\"}",
                    "level": "%level",
                    "url": "%X{url:-}",
                    "userId": "%X{userId:-}",
                    "token": "%X{token:-}",
                    "version": "${version:-}",
                    "appName": "${appName:-}",
                    "trace": "%X{X-B3-TraceId:-}",
                    "span": "%X{X-B3-SpanId:-}",
                    "parent": "%X{X-B3-ParentSpanId:-}",
                    "exportable": "%X{X-Span-Export:-}",
                    "pid": "${PID:-}",
                    "class": "%logger{40}",
                    "message": "%message",
                    "user-agent": "%X{User-Agent:-}",
                    "from": "%X{Host:-}",
                    "ip": "%X{ip:-}"
                    }
                </pattern>
            </pattern>
        </providers>
    </encoder>
</appender>
<root level="info">
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="kafkaLog"/>
</root>