The roles of depends-on and lookup-method in Spring

This post looks at what the depends-on attribute does in the Spring framework, explains how it expresses a dependency between beans, and shows the order in which the init and destroy methods run. It then covers lookup-method: how to use a prototype-scoped bean from inside a singleton bean so that every lookup returns a fresh instance.

What depends-on does

depends-on declares that one bean depends on another. The depended-on bean typically takes care of setup and teardown work (for example opening and closing a resource) that the dependent bean relies on.

  In the example below, DependentBean depends on ResourceBean.
  ResourceBean holds the file path and opens and closes the file stream,
  while DependentBean only writes to the file, which keeps each class to a single responsibility.
  Also note the order in which the two beans' init and destroy methods run:
  the depended-on bean's init method runs first, then the dependent bean's init method;

  the dependent bean's destroy method runs first, then the depended-on bean's destroy method.

package com.shux.springsource.depends.on;

import java.io.IOException;

/**
 * Description: the dependent bean; it only writes to the file and relies on
 * ResourceBean to open and close the underlying stream.
 *
 * <pre>HISTORY
 * ****************************************************************************
 *  ID   DATE           PERSON          REASON
 *  1    2018-01-17     Simba.Hua       Create
 * ****************************************************************************
 * </pre>
 * @author Simba.Hua
 */
public class DependentBean {
	private ResourceBean resourceBean;
	
	public void write(String str) throws IOException {
		System.out.println("DependentBean writing: " + str);
		resourceBean.getFos().write(str.getBytes());
	}
	public void init() throws IOException {
		System.out.println("DependentBean: init");
		resourceBean.getFos().write("DependentBean: init\r\n".getBytes());
	}
	public void destroy() throws IOException {
		System.out.println("DependentBean: destroy");
		resourceBean.getFos().write("DependentBean: destroy\r\n".getBytes());
	}
	public ResourceBean getResourceBean() {
		return resourceBean;
	}
	public void setResourceBean(ResourceBean resourceBean) {
		this.resourceBean = resourceBean;
	}
	
}
package com.shux.springsource.depends.on;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

/**
 * Description: the depended-on bean; it holds the file path and opens and
 * closes the FileOutputStream. Used to test depends-on in Spring.
 *
 * <pre>HISTORY
 * ****************************************************************************
 *  ID   DATE           PERSON          REASON
 *  1    2018-01-17     Simba.Hua       Create
 * ****************************************************************************
 * </pre>
 * @author Simba.Hua
 */
public class ResourceBean {
	private FileOutputStream fos;
	private File file;
	public void init() {
		System.out.println("ResourceBean初始化");
		System.out.println("ResourceBean加载资源,做一些预操作");
		try {
			this.fos = new FileOutputStream(file);
		} catch (FileNotFoundException e) {
			e.printStackTrace();
		}
	}
	
	public void destroy() {
		System.out.println("ResourceBean销毁");
		System.out.println("ResourceBean释放资源,进行一些清理操作");
		try {
			fos.close();
		} catch (IOException e) {
			e.printStackTrace();
		}
	}

	public FileOutputStream getFos() {
		return fos;
	}

	public void setFos(FileOutputStream fos) {
		this.fos = fos;
	}

	public File getFile() {
		return file;
	}

	public void setFile(File file) {
		this.file = file;
	}
	
	
}

<!-- 测试depends-on -->
	<bean id="resourceBean" class="com.shux.springsource.depends.on.ResourceBean" init-method="init" destroy-method="destroy">
		<property name="file" value="D:/test.txt"></property>
	</bean>
	<bean id="dependentBean" class="com.shux.springsource.depends.on.DependentBean" init-method="init" destroy-method="destroy" depends-on="resourceBean">
		<property name="resourceBean" ref="resourceBean"></property>
	</bean>
The test below loads the Spring context from mybeans.xml, registers a shutdown hook so that the destroy methods run when the JVM exits, and writes through DependentBean:

    @Test
    public void testDependsOn() {
    	ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("mybeans.xml");
    	context.registerShutdownHook();
    	DependentBean dependentBean = context.getBean("dependentBean", DependentBean.class);
    	try {
    		dependentBean.write("hello, writing from the test\r\n");
    	} catch (IOException e) {
    		e.printStackTrace();
    	}
    }
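
With these definitions, the console output should look roughly like the listing below (an expected trace based on the code above, not captured from a run): the depended-on ResourceBean is initialized first and destroyed last.

ResourceBean: init
ResourceBean: loading the resource and doing setup work
DependentBean: init
DependentBean writing: hello, writing from the test
DependentBean: destroy
ResourceBean: destroy
ResourceBean: releasing the resource and cleaning up
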
lookup-method


lookup-method solves the following problem: a singleton bean has a property whose target bean is prototype-scoped. With plain setter injection the prototype bean is injected only once, so the singleton keeps using that single instance for its whole lifetime, which is usually not what we want; we want a new instance each time. With lookup-method, Spring overrides the declared method so that every call looks the bean up from the container again. Even though the host bean is a singleton, a prototype-scoped target then yields a new instance on each call.

The following example is adapted from http://jinnianshilongnian.iteye.com/blog/1415461:

package com.shux.springsource.lookup;
/**
 * Description: the hello interface used in the lookup-method example.
 *
 * <pre>HISTORY
 * ****************************************************************************
 *  ID   DATE           PERSON          REASON
 *  1    2018-01-18     Simba.Hua       Create
 * ****************************************************************************
 * </pre>
 * @author Simba.Hua
 */
public interface HelloApi {

	void sayHello();

}
package com.shux.springsource.lookup;
/**
 * Description: a simple printer that counts how many times it has printed.
 *
 * <pre>HISTORY
 * ****************************************************************************
 *  ID   DATE           PERSON          REASON
 *  1    2018-01-18     Simba.Hua       Create
 * ****************************************************************************
 * </pre>
 * @author Simba.Hua
 */
public class Printer {
	private int counter;
	public void print(String type) {
		System.out.println(type + "  printer " + counter ++);
	}
}
package com.shux.springsource.lookup;
/**
 * Description: has a setter-injected Printer plus two methods that are
 * overridden by Spring through lookup-method configuration.
 *
 * <pre>HISTORY
 * ****************************************************************************
 *  ID   DATE           PERSON          REASON
 *  1    2018-01-18     Simba.Hua       Create
 * ****************************************************************************
 * </pre>
 * @author Simba.Hua
 */
public abstract class HelloImpl implements HelloApi{
	private Printer printer;
	@Override
	public void sayHello() {
		printer.print("setter");
		createPrototypePrinter().print("prototype");
		createSingletonPrinter().print("singleton");
	}
	public abstract Printer createPrototypePrinter();
	public Printer createSingletonPrinter() {
		System.out.println("该方法不会被执行,如果执行了就出错了");
		return new Printer();
	}
	
	public void setPrinter(Printer printer) {
		this.printer = printer;
	}
}

	<!-- 测试look-up -->
	<bean id="prototypePrinter" class="com.shux.springsource.lookup.Printer" scope="prototype"></bean>
	
	<bean id="singletonPrinter" class="com.shux.springsource.lookup.Printer" scope="singleton"></bean>
	
	<bean id="prototypeHelloApi" class="com.shux.springsource.lookup.HelloImpl" scope="prototype">
		<property name="printer" ref="prototypePrinter"></property>
		<lookup-method name="createPrototypePrinter" bean="prototypePrinter"/>
		<lookup-method name="createSingletonPrinter" bean="singletonPrinter"/>
	</bean>
	
	<bean id="singletonHelloApi" class="com.shux.springsource.lookup.HelloImpl" scope="singleton">
		<property name="printer" ref="prototypePrinter"></property>
		<lookup-method name="createPrototypePrinter" bean="prototypePrinter"/>
		<lookup-method name="createSingletonPrinter" bean="singletonPrinter"/>
	</bean>
The test below calls sayHello() twice on the singleton host bean and twice on the prototype host bean:

@Test
    public void testLookUp() {
    	ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("mybeans.xml");
    	System.out.println("------------Singleton sayHello-----------");
    	HelloApi singletonHelloApi = context.getBean("singletonHelloApi", HelloApi.class);
    	singletonHelloApi.sayHello();
    	singletonHelloApi = context.getBean("singletonHelloApi", HelloApi.class);
    	singletonHelloApi.sayHello();
    	System.out.println("------------Prototype sayHello-----------");
    	HelloApi prototypeHelloApi = context.getBean("prototypeHelloApi", HelloApi.class);
    	prototypeHelloApi.sayHello();
    	prototypeHelloApi = context.getBean("prototypeHelloApi", HelloApi.class);
    	prototypeHelloApi.sayHello();
    }
The output is as follows:

------------Singleton sayHello-----------
setter  printer 0
prototype  printer 0
singleton  printer 0
setter  printer 1
prototype  printer 0
singleton  printer 1
------------Prototype sayHello-----------
setter  printer 0
prototype  printer 0
singleton  printer 2
setter  printer 0
prototype  printer 0
singleton  printer 3
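
Reading the output: the "prototype" lines always print 0 because a new Printer is looked up on every call, while the single "singleton" Printer keeps counting up across both host beans; the setter-injected printer only starts over when the host bean itself is prototype-scoped, since each new host gets its own injected Printer instance.

If you are on Spring 4.3 or later, another way to get a fresh prototype instance per call is to inject an ObjectProvider (or ObjectFactory) instead of using method injection. A minimal sketch, not part of the original example, which assumes a single prototype-scoped Printer bean definition so that getObject() resolves unambiguously:

package com.shux.springsource.lookup;

import org.springframework.beans.factory.ObjectProvider;

public class ProviderHelloImpl implements HelloApi {

	private final ObjectProvider<Printer> printerProvider;

	public ProviderHelloImpl(ObjectProvider<Printer> printerProvider) {
		this.printerProvider = printerProvider;
	}

	@Override
	public void sayHello() {
		// getObject() resolves the Printer bean on every call; with a
		// prototype-scoped Printer that means a new instance each time.
		printerProvider.getObject().print("provider");
	}
}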







