He3DB (海山数据库) Source Code Explained: He3DB MySQL Redo Log - the Log File Group

# 1. The Log File Group
By default, the MySQL data directory contains two files named ib_logfile0 and ib_logfile1; the log in the log buffer is, by default, flushed to these two files on disk.
If the defaults are not suitable, they can be tuned with the following startup parameters (a sample configuration follows the list):

  • innodb_log_group_home_dir: the directory where the redo log files live; the default is the data directory.
  • innodb_log_file_size: the size of each redo log file; the default is 48MB.
  • innodb_log_files_in_group: the number of redo log files; the default is 2, the maximum is 100.
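For illustration, a minimal my.cnf fragment setting all three parameters might look like the following; the directory and the file count are assumptions for the example, not recommendations:

```ini
[mysqld]
# Directory that will hold ib_logfile0, ib_logfile1, ... (assumed path)
innodb_log_group_home_dir = /var/lib/mysql
# Each redo log file is 48MB (the default, written out explicitly here)
innodb_log_file_size = 48M
# Four files in the group: ib_logfile0 .. ib_logfile3
innodb_log_files_in_group = 4
```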

As the description above shows, there is more than one redo log file on disk: together they form a log file group. The files are named ib_logfile[number] (where number is 0, 1, 2, …). Redo log is written to the group starting with ib_logfile0; when ib_logfile0 fills up, writing continues in ib_logfile1, when ib_logfile1 fills up it moves on to ib_logfile2, and so on. What happens when the last file fills up? Writing wraps around to ib_logfile0 again, so the whole process is circular, as shown in the figure below:

(Figure: redo log files are written in a circle: ib_logfile0 → ib_logfile1 → … → back to ib_logfile0)

The total redo log capacity is therefore innodb_log_file_size × innodb_log_files_in_group (with the defaults, 48MB × 2 = 96MB).

# 2. Redo Log File Format

As mentioned earlier, the log buffer is essentially a contiguous region of memory divided into 512-byte blocks. Flushing the log buffer's redo log to disk really means writing images of those blocks into the log files, so a redo log file also consists of 512-byte blocks. Every file in the redo log file group has the same size and the same format, made up of two parts:

  • The first 2048 bytes, i.e. the first 4 blocks, store management information.
  • Everything from byte 2048 onward stores images of the log buffer's blocks.
    So the circular use of the redo log files described above actually starts at byte 2048 of each file, as the figure shows:

(Figure: each redo log file = a 2048-byte management area followed by log block images, used circularly)
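To make the circular layout concrete, here is a small standalone C++ sketch (not He3DB/MySQL source) that maps a byte offset within the group's usable log area to a file and an in-file position, assuming each file reserves a 2048-byte management header:

```cpp
#include <cstdint>
#include <cstdio>

// Simplified model of the log file group layout (illustrative only).
static const uint64_t LOG_FILE_HDR_SIZE = 2048;      // management area per file
static const uint64_t FILE_SIZE         = 48 << 20;  // innodb_log_file_size
static const uint64_t N_FILES           = 2;         // innodb_log_files_in_group

// Map an offset within the group's usable (circular) log area to a
// (file index, offset inside that file) pair.
void locate(uint64_t group_offset, uint64_t* file_no, uint64_t* file_offset) {
    const uint64_t payload_per_file = FILE_SIZE - LOG_FILE_HDR_SIZE;
    group_offset %= N_FILES * payload_per_file;        // wrap around the group
    *file_no     = group_offset / payload_per_file;    // which ib_logfileN
    *file_offset = LOG_FILE_HDR_SIZE                   // skip the header blocks
                 + group_offset % payload_per_file;
}

int main() {
    uint64_t f, off;
    locate((48 << 20) - 2048, &f, &off);  // first byte past ib_logfile0's data
    std::printf("file=ib_logfile%llu offset=%llu\n",
                (unsigned long long) f, (unsigned long long) off);
    return 0;
}
```

With the default two 48MB files, offset 50329600 (48MB − 2048) is exactly one file's worth of payload, so it lands at the start of ib_logfile1's data area (in-file offset 2048).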

Let's now look at the format of the first 2048 bytes of each redo log file, i.e. the four special blocks at the start, shown in the figure below.

(Figure: the four management blocks: log file header, checkpoint1, an unused block, checkpoint2)

The first block is the log file header, which contains the following fields:

  • LOG_HEADER_FORMAT: 4 bytes, the redo log format version; this value is always 1.
  • LOG_HEADER_PAD1: 4 bytes, padding.
  • LOG_HEADER_START_LSN: 8 bytes, the LSN at which this redo log file starts, i.e. the LSN corresponding to file offset 2048.
  • LOG_HEADER_CREATOR: 32 bytes, a string identifying who created this redo log file. During normal operation it holds the MySQL version string; redo log files created with the mysqlbackup command store "ibbackup" together with the creation time.
  • LOG_BLOCK_CHECKSUM: 4 bytes, the checksum of this block.

(Figure: log file header layout)
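As a hedged illustration of this layout, the sketch below parses these fields out of a file's first 512-byte block. It assumes the MySQL 5.7 offsets (format at byte 0, 4 bytes of padding, start LSN at byte 8, the 32-byte creator string at byte 16, and the checksum in the block's last 4 bytes) and InnoDB's big-endian field encoding; it is standalone example code, not the server's own reader:

```cpp
#include <cstdint>
#include <string>

// Big-endian readers, mirroring what InnoDB's mach_read_from_4/_8 do (sketch).
static uint32_t read_u32(const uint8_t* p) {
    return ((uint32_t) p[0] << 24) | ((uint32_t) p[1] << 16)
         | ((uint32_t) p[2] << 8)  |  (uint32_t) p[3];
}
static uint64_t read_u64(const uint8_t* p) {
    return ((uint64_t) read_u32(p) << 32) | read_u32(p + 4);
}

struct LogFileHeader {
    uint32_t    format;     // LOG_HEADER_FORMAT (4B version, then 4B padding)
    uint64_t    start_lsn;  // LOG_HEADER_START_LSN (LSN of file offset 2048)
    std::string creator;    // LOG_HEADER_CREATOR (32B string)
    uint32_t    checksum;   // LOG_BLOCK_CHECKSUM (last 4B of the block)
};

// Parse the first 512-byte block of a redo log file. Offsets are assumed
// from the MySQL 5.7 layout; verify against log0log.h before relying on them.
LogFileHeader parse_log_file_header(const uint8_t block[512]) {
    LogFileHeader h;
    h.format    = read_u32(block + 0);
    h.start_lsn = read_u64(block + 8);
    size_t len = 0;
    while (len < 32 && block[16 + len] != '\0') ++len;  // NUL-padded string
    h.creator.assign(reinterpret_cast<const char*>(block) + 16, len);
    h.checksum  = read_u32(block + 512 - 4);
    return h;
}
```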

The second block is checkpoint1, which records the following checkpoint-related fields:

  • LOG_CHECKPOINT_NO: 8 bytes, the server's checkpoint number; it is incremented by 1 every time a checkpoint is made.
  • LOG_CHECKPOINT_LSN: 8 bytes, the LSN at the point the checkpoint completed; crash recovery starts from this value.
  • LOG_CHECKPOINT_OFFSET: 8 bytes, the offset of that LSN within the redo log file group.
  • LOG_CHECKPOINT_LOG_BUF_SIZE: 8 bytes, the size of the log buffer at the time the checkpoint was made.
  • LOG_BLOCK_CHECKSUM: 4 bytes, the checksum of this block.

(Figure: checkpoint block layout)

The fourth block, checkpoint2, has exactly the same structure as checkpoint1 (the third block is left unused), so it is not described again. Keeping two checkpoint blocks and writing them alternately means that a crash in the middle of writing one checkpoint still leaves the other one intact, as the sketch below illustrates.
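On startup recovery the server reads both checkpoint blocks and trusts the valid one with the larger checkpoint number; in MySQL this logic lives in recv_find_max_checkpoint(). A simplified stand-in, with the checksum validation assumed to have been done already:

```cpp
#include <cstdint>
#include <initializer_list>

struct CheckpointBlock {
    uint64_t checkpoint_no;   // LOG_CHECKPOINT_NO
    uint64_t checkpoint_lsn;  // LOG_CHECKPOINT_LSN: where recovery would start
    bool     checksum_ok;     // assumed result of checking LOG_BLOCK_CHECKSUM
};

// Pick the checkpoint to recover from: the valid block with the larger
// checkpoint_no wins (simplified model of recv_find_max_checkpoint()).
const CheckpointBlock* pick_checkpoint(const CheckpointBlock& cp1,
                                       const CheckpointBlock& cp2) {
    const CheckpointBlock* best = nullptr;
    for (const CheckpointBlock* cp : { &cp1, &cp2 }) {
        if (cp->checksum_ok
            && (best == nullptr || cp->checkpoint_no > best->checkpoint_no)) {
            best = cp;  // most recent intact checkpoint so far
        }
    }
    return best;  // nullptr: no usable checkpoint, recovery cannot proceed
}
```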

# 3. Source Code Walkthrough

## 3.1 The Log Group Struct

```cpp
struct log_group_t{
	/** log group identifier (always 0) */
	ulint				id;
	/** number of files in the group */
	ulint				n_files;
	/** format of the redo log: e.g., LOG_HEADER_FORMAT_CURRENT */
	ulint				format;
	/** individual log file size in bytes, including the header */
	lsn_t				file_size;
	/** file space which implements the log group */
	ulint				space_id;
	/** corruption status */
	log_group_state_t		state;
	/** lsn used to fix coordinates within the log group */
	lsn_t				lsn;
	/** the byte offset of the above lsn */
	lsn_t				lsn_offset;
	/** unaligned buffers */
	byte**				file_header_bufs_ptr;
	/** buffers for each file header in the group */
	byte**				file_header_bufs;

	/** used only in recovery: recovery scan succeeded up to this
	lsn in this log group */
	lsn_t				scanned_lsn;
	/** unaligned checkpoint header */
	byte*				checkpoint_buf_ptr;
	/** buffer for writing a checkpoint header */
	byte*				checkpoint_buf;
	/** list of log groups */
	UT_LIST_NODE_T(log_group_t)	log_groups;
};
```

The log group struct holds basic information such as the group id, the number of files, and the format of the redo log files it contains. A few fields deserve special attention (a sketch of how they are used follows the list):

  • lsn: an anchor LSN used to fix coordinates within the log group
  • lsn_offset: the byte offset of that LSN
  • checkpoint_buf: the buffer used for writing a checkpoint header
  • UT_LIST_NODE_T(log_group_t) log_groups: the list node linking the log groups together
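The lsn / lsn_offset pair is an anchor point: to locate any other LSN in the group, InnoDB adds the LSN distance to the anchor's offset while skipping each file's 2048-byte header and wrapping around the group. Below is a simplified sketch of that computation, modelled on log_group_calc_lsn_offset(); it assumes lsn >= group_lsn and is illustrative only:

```cpp
#include <cstdint>

typedef uint64_t lsn_t;

static const lsn_t LOG_FILE_HDR_SIZE = 2048;  // management area per file

// Simplified model of log_group_calc_lsn_offset(): translate a target LSN
// into a byte offset within the group, using the anchor pair
// (group_lsn, group_lsn_offset) kept in log_group_t.
lsn_t calc_lsn_offset(lsn_t lsn, lsn_t group_lsn, lsn_t group_lsn_offset,
                      lsn_t n_files, lsn_t file_size) {
    const lsn_t payload_per_file = file_size - LOG_FILE_HDR_SIZE;
    const lsn_t capacity         = n_files * payload_per_file;

    // LSN distance from the anchor (assumes lsn >= group_lsn for brevity).
    lsn_t delta = (lsn - group_lsn) % capacity;

    // Anchor offset with the per-file headers stripped out ...
    lsn_t base = group_lsn_offset
               - (group_lsn_offset / file_size + 1) * LOG_FILE_HDR_SIZE;

    // ... advance and wrap within the group's payload capacity ...
    lsn_t raw = (base + delta) % capacity;

    // ... and put the headers back to get a real file-group offset.
    return raw + (raw / payload_per_file + 1) * LOG_FILE_HDR_SIZE;
}
```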

## 3.2 When the Log Is Flushed

Ideally, a flush would happen the moment a transaction commits; in practice the timing is governed by a policy.
InnoDB exposes the innodb_flush_log_at_trx_commit parameter to control the redo log flush policy. It supports three settings (a commit-time sketch follows the list):

  • 0: do not flush the log at transaction commit.
  • 1: write and flush (fsync) the redo log at every transaction commit (the default).
  • 2: at every transaction commit, only write the contents of the redo log buffer to the page cache.

Because innodb_flush_log_at_trx_commit defaults to 1, a commit normally ends with an fsync of the redo log. Independently of this setting, an InnoDB background thread writes the redo log buffer to the file system cache (page cache) and then calls fsync once per second.
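A minimal sketch of the commit-time decision; write_to_page_cache() and fsync_log() are hypothetical stand-ins for the real write/flush machinery (which in the server runs through log_write_up_to(), analyzed below):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical stand-ins for the real write/flush machinery.
static void write_to_page_cache(uint64_t up_to_lsn) {
    std::printf("write log buffer to page cache up to LSN %llu\n",
                (unsigned long long) up_to_lsn);
}
static void fsync_log() { std::printf("fsync the redo log file\n"); }

// Commit-time decision driven by innodb_flush_log_at_trx_commit
// (cf. trx_flush_log_if_needed(), which ends up in log_write_up_to()).
void flush_log_at_commit(int flush_log_at_trx_commit, uint64_t commit_lsn) {
    switch (flush_log_at_trx_commit) {
    case 0:  // nothing at commit; the 1-second background thread flushes
        break;
    case 1:  // default: the commit is durable before it returns
        write_to_page_cache(commit_lsn);
        fsync_log();
        break;
    case 2:  // in the page cache at commit; durable once the OS (or the
             // background thread) flushes it
        write_to_page_cache(commit_lsn);
        break;
    }
}
```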

### 3.2.1 Flushing on Commit

Flushing on commit mainly involves two key functions: log_buffer_flush_to_disk() and log_write_up_to().

```cpp
void
log_buffer_flush_to_disk(
	bool sync)
{
	ut_ad(!srv_read_only_mode);
	log_write_up_to(log_get_lsn(), sync);
}
```
  • This code writes the contents of the log buffer, up to the latest log record, into the log files on disk by delegating to log_write_up_to(), and decides via its parameter whether the data should also be flushed to disk.
  • The flush is controlled by the sync parameter: if sync is true, the data is flushed (fsync); otherwise it is only written to the operating system cache. By default, bool sync = true.
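For instance, a caller that wants durability right away, such as a once-per-second background flush, could simply invoke (an illustrative call, not a verbatim quote from the source):

```cpp
// Write everything up to log_get_lsn() and fsync it.
log_buffer_flush_to_disk(true);
```

The heavy lifting happens in log_write_up_to(), listed in full below.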
```cpp
void
log_write_up_to(
	lsn_t	lsn,
	bool	flush_to_disk)
{
#ifdef UNIV_DEBUG
	ulint	loop_count	= 0;
#endif /* UNIV_DEBUG */
	byte*           write_buf;
	lsn_t           write_lsn;

	ut_ad(!srv_read_only_mode);

	if (recv_no_ibuf_operations) {

		return;
	}

loop:
	ut_ad(++loop_count < 128);

	log_write_mutex_enter();
	ut_ad(!recv_no_log_write);

	lsn_t	limit_lsn = flush_to_disk
		? log_sys->flushed_to_disk_lsn
		: log_sys->write_lsn;

	if (limit_lsn >= lsn) {
		log_write_mutex_exit();
		return;
	}

#ifdef _WIN32
# ifndef UNIV_HOTBACKUP
	/* write requests during fil_flush() might not be good for Windows */
	if (log_sys->n_pending_flushes > 0
	    || !os_event_is_set(log_sys->flush_event)) {
		log_write_mutex_exit();
		os_event_wait(log_sys->flush_event);
		goto loop;
	}
# else
	if (log_sys->n_pending_flushes > 0) {
		goto loop;
	}
# endif  /* !UNIV_HOTBACKUP */
#endif /* _WIN32 */

	if (flush_to_disk
	    && (log_sys->n_pending_flushes > 0
		|| !os_event_is_set(log_sys->flush_event))) {

		/* Figure out if the current flush will do the job
		for us. */
		bool work_done = log_sys->current_flush_lsn >= lsn;

		log_write_mutex_exit();

		os_event_wait(log_sys->flush_event);

		if (work_done) {
			return;
		} else {
			goto loop;
		}
	}

	log_mutex_enter();
	if (!flush_to_disk
	    && log_sys->buf_free == log_sys->buf_next_to_write) {
		/* Nothing to write and no flush to disk requested */
		log_mutex_exit_all();
		return;
	}

	log_group_t*	group;
	ulint		start_offset;
	ulint		end_offset;
	ulint		area_start;
	ulint		area_end;
	ulong		write_ahead_size = srv_log_write_ahead_size;
	ulint		pad_size;

	DBUG_PRINT("ib_log", ("write " LSN_PF " to " LSN_PF,
			      log_sys->write_lsn,
			      log_sys->lsn));

	if (flush_to_disk) {
		log_sys->n_pending_flushes++;
		log_sys->current_flush_lsn = log_sys->lsn;
		MONITOR_INC(MONITOR_PENDING_LOG_FLUSH);
		os_event_reset(log_sys->flush_event);

		if (log_sys->buf_free == log_sys->buf_next_to_write) {
			/* Nothing to write, flush only */
			log_mutex_exit_all();
			log_write_flush_to_disk_low();
			return;
		}
	}

	start_offset = log_sys->buf_next_to_write;
	end_offset = log_sys->buf_free;

	area_start = ut_calc_align_down(start_offset, OS_FILE_LOG_BLOCK_SIZE);
	area_end = ut_calc_align(end_offset, OS_FILE_LOG_BLOCK_SIZE);

	ut_ad(area_end - area_start > 0);

	log_block_set_flush_bit(log_sys->buf + area_start, TRUE);
	log_block_set_checkpoint_no(
		log_sys->buf + area_end - OS_FILE_LOG_BLOCK_SIZE,
		log_sys->next_checkpoint_no);

	write_lsn = log_sys->lsn;
	write_buf = log_sys->buf;

	log_buffer_switch();

	group = UT_LIST_GET_FIRST(log_sys->log_groups);

	log_group_set_fields(group, log_sys->write_lsn);

	log_mutex_exit();

	/* Calculate pad_size if needed. */
	pad_size = 0;
	if (write_ahead_size > OS_FILE_LOG_BLOCK_SIZE) {
		lsn_t	end_offset;
		ulint	end_offset_in_unit;

		end_offset = log_group_calc_lsn_offset(
			ut_uint64_align_up(write_lsn,
					   OS_FILE_LOG_BLOCK_SIZE),
			group);
		end_offset_in_unit = (ulint) (end_offset % write_ahead_size);

		if (end_offset_in_unit > 0
		    && (area_end - area_start) > end_offset_in_unit) {
			/* The first block in the unit was initialized
			after the last writing.
			Needs to be written padded data once. */
			pad_size = write_ahead_size - end_offset_in_unit;

			if (area_end + pad_size > log_sys->buf_size) {
				pad_size = log_sys->buf_size - area_end;
			}

			::memset(write_buf + area_end, 0, pad_size);
		}
	}

	/* Do the write to the log files */
	log_group_write_buf(
		group, write_buf + area_start,
		area_end - area_start + pad_size,
		ut_uint64_align_down(log_sys->write_lsn,
				     OS_FILE_LOG_BLOCK_SIZE),
		start_offset - area_start);

	srv_stats.log_padded.add(pad_size);

	log_sys->write_lsn = write_lsn;

#ifndef _WIN32
	if (srv_unix_file_flush_method == SRV_UNIX_O_DSYNC) {
		/* O_SYNC means the OS did not buffer the log file at all:
		so we have also flushed to disk what we have written */
		log_sys->flushed_to_disk_lsn = log_sys->write_lsn;
	}
#endif /* !_WIN32 */

	log_write_mutex_exit();

	if (flush_to_disk) {
		log_write_flush_to_disk_low();
	}
}
```
  • This function is responsible for writing log data from the in-memory buffer into the log files on disk and, when requested, synchronizing (flushing) it to durable storage. It is a core piece of the database's log management.

1. Preparation

```cpp
byte*           write_buf;
lsn_t           write_lsn;

ut_ad(!srv_read_only_mode);

if (recv_no_ibuf_operations) {

	return;
}

loop:
	ut_ad(++loop_count < 128);
```
  • Local variables: write_buf and write_lsn record, respectively, the buffer the write will read from and the LSN up to which the log is written.
  • Assertion: the server must not be running in read-only mode.
  • If recv_no_ibuf_operations is true, recovery is still running and operations on the log files are not allowed yet, so the function returns immediately.
  • The assertion on loop_count (declared under UNIV_DEBUG in debug builds) bounds the number of jumps back to the loop label, guarding against an infinite loop.

2. Condition Checks

```cpp
	log_write_mutex_enter();
	ut_ad(!recv_no_log_write);

	lsn_t	limit_lsn = flush_to_disk
		? log_sys->flushed_to_disk_lsn
		: log_sys->write_lsn;

	if (limit_lsn >= lsn) {
		log_write_mutex_exit();
		return;
	}
```
  • limit_lsn is the flushed-to-disk LSN when a flush was requested, otherwise the written LSN. If limit_lsn is already greater than or equal to the target lsn, the mutex is released and the function returns: the required log has already been written (or flushed) that far.
```cpp
if (log_sys->n_pending_flushes > 0
	|| !os_event_is_set(log_sys->flush_event)) {
	log_write_mutex_exit();
	os_event_wait(log_sys->flush_event);
	goto loop;
}
```
  • (In the full listing this branch sits inside #ifdef _WIN32.) It checks whether there is a pending flush or flush_event is not set; if so, it releases the lock, waits on flush_event, and jumps back to loop.
```cpp
if (flush_to_disk
	&& (log_sys->n_pending_flushes > 0
	|| !os_event_is_set(log_sys->flush_event))) {

	/* Figure out if the current flush will do the job
	for us. */
	bool work_done = log_sys->current_flush_lsn >= lsn;

	log_write_mutex_exit();

	os_event_wait(log_sys->flush_event);

	if (work_done) {
		return;
	} else {
		goto loop;
	}
}

log_mutex_enter();
if (!flush_to_disk
	&& log_sys->buf_free == log_sys->buf_next_to_write) {
	/* Nothing to write and no flush to disk requested */
	log_mutex_exit_all();
	return;
}
```
  • If a flush to disk was requested and a flush is already pending (or flush_event has not been signalled), the following happens:
  • A boolean work_done records whether the flush currently in progress already covers the target log sequence number lsn.
  • If current_flush_lsn is greater than or equal to lsn, work_done is true, and after waiting on flush_event the function simply returns; otherwise it jumps back to loop and retries.
  • The code then takes the second mutex and checks whether no disk flush was requested and the log buffer has nothing left to write. If both conditions hold, it releases all related mutexes and returns.

3. Preparing the Write

```cpp
log_group_t*	group;         // pointer to the log group
ulint		start_offset;  // start offset of the pending data in the log buffer
ulint		end_offset;    // end offset of the pending data in the log buffer
ulint		area_start;    // write area start, aligned down to a block boundary
ulint		area_end;      // write area end, aligned up to a block boundary
ulong		write_ahead_size = srv_log_write_ahead_size;   // write-ahead size
ulint		pad_size;      // padding size

DBUG_PRINT("ib_log", ("write " LSN_PF " to " LSN_PF,
				log_sys->write_lsn,
				log_sys->lsn));

if (flush_to_disk) {
	log_sys->n_pending_flushes++;   // one more pending flush operation
	log_sys->current_flush_lsn = log_sys->lsn;  // LSN this flush is working towards
	MONITOR_INC(MONITOR_PENDING_LOG_FLUSH);
	os_event_reset(log_sys->flush_event);

	if (log_sys->buf_free == log_sys->buf_next_to_write) {
		/* Nothing to write, flush only */
		log_mutex_exit_all();
		log_write_flush_to_disk_low();
		return;
	}
}

/* Compute the start/end offsets of the pending data, then align them
to 512-byte block boundaries. */
start_offset = log_sys->buf_next_to_write;
end_offset = log_sys->buf_free;

area_start = ut_calc_align_down(start_offset, OS_FILE_LOG_BLOCK_SIZE);
area_end = ut_calc_align(end_offset, OS_FILE_LOG_BLOCK_SIZE);

ut_ad(area_end - area_start > 0);

log_block_set_flush_bit(log_sys->buf + area_start, TRUE);
log_block_set_checkpoint_no(
	log_sys->buf + area_end - OS_FILE_LOG_BLOCK_SIZE,
	log_sys->next_checkpoint_no);

write_lsn = log_sys->lsn;
write_buf = log_sys->buf;

log_buffer_switch();   // switch to the other half of the log buffer

group = UT_LIST_GET_FIRST(log_sys->log_groups);  // take the first log group from the list

log_group_set_fields(group, log_sys->write_lsn);  // update the group's lsn / lsn_offset fields

log_mutex_exit();

/* Calculate pad_size if needed. */
pad_size = 0;
/* Padding is only needed when the write-ahead size exceeds the OS log
block size. */
if (write_ahead_size > OS_FILE_LOG_BLOCK_SIZE) {
	lsn_t	end_offset;
	ulint	end_offset_in_unit;

	end_offset = log_group_calc_lsn_offset(
		ut_uint64_align_up(write_lsn,
					OS_FILE_LOG_BLOCK_SIZE),
		group);
	end_offset_in_unit = (ulint) (end_offset % write_ahead_size);

	if (end_offset_in_unit > 0
		&& (area_end - area_start) > end_offset_in_unit) {
		/* The first block in the unit was initialized
		after the last writing.
		Needs to be written padded data once. */
		pad_size = write_ahead_size - end_offset_in_unit;

		if (area_end + pad_size > log_sys->buf_size) {
			pad_size = log_sys->buf_size - area_end;
		}

		::memset(write_buf + area_end, 0, pad_size);
	}
}
```
  • The purpose of this code is to handle the alignment and padding details while making sure the log data is written (and, when requested, flushed) to disk correctly; the mutexes and condition checks make access to the log system safe under concurrency. The sketch below isolates the alignment and padding arithmetic.
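Here is a standalone sketch with simplified equivalents of ut_calc_align_down()/ut_calc_align() (valid for power-of-two sizes) and the padding computation, using the 512-byte block size and an assumed 8KB write-ahead size:

```cpp
#include <cstdint>
#include <cstdio>

static const uint64_t OS_FILE_LOG_BLOCK_SIZE = 512;

// Equivalents of ut_calc_align_down() / ut_calc_align() for power-of-2 sizes.
static uint64_t align_down(uint64_t n, uint64_t size) { return n & ~(size - 1); }
static uint64_t align_up(uint64_t n, uint64_t size) {
    return (n + size - 1) & ~(size - 1);
}

int main() {
    uint64_t start_offset = 100, end_offset = 2500;   // pending bytes in buffer
    uint64_t area_start = align_down(start_offset, OS_FILE_LOG_BLOCK_SIZE); // 0
    uint64_t area_end   = align_up(end_offset, OS_FILE_LOG_BLOCK_SIZE);     // 2560

    // Write-ahead padding: pad the tail so the physical write covers whole
    // write-ahead units (assumed srv_log_write_ahead_size = 8192).
    uint64_t write_ahead_size   = 8192;
    uint64_t end_offset_in_unit = 2048;               // e.g. file offset % 8192
    uint64_t pad_size = 0;
    if (end_offset_in_unit > 0 && (area_end - area_start) > end_offset_in_unit)
        pad_size = write_ahead_size - end_offset_in_unit;           // 6144

    std::printf("area [%llu, %llu), pad %llu bytes\n",
                (unsigned long long) area_start,
                (unsigned long long) area_end,
                (unsigned long long) pad_size);
    return 0;
}
```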

4. Writing to the Log Files

```cpp
/* Write the buffer contents from area_start to area_end + pad_size into the
 * given log group. The write starts at log_sys->write_lsn aligned down to
 * OS_FILE_LOG_BLOCK_SIZE; start_offset - area_start is the position of the
 * new data within the first block. */
log_group_write_buf(
	group, write_buf + area_start,
	area_end - area_start + pad_size,
	ut_uint64_align_down(log_sys->write_lsn,
					OS_FILE_LOG_BLOCK_SIZE),
	start_offset - area_start);

srv_stats.log_padded.add(pad_size);  // account the padding in the server statistics,
                                     // useful for monitoring padding behaviour

log_sys->write_lsn = write_lsn;  // advance the log system's write_lsn

/* On non-Windows platforms with O_DSYNC, the OS does not buffer the log
file, so everything just written is already on disk: advance
flushed_to_disk_lsn to the current write_lsn. */
#ifndef _WIN32
	if (srv_unix_file_flush_method == SRV_UNIX_O_DSYNC) {
		log_sys->flushed_to_disk_lsn = log_sys->write_lsn;
	}
#endif /* !_WIN32 */

log_write_mutex_exit();

if (flush_to_disk) {
	log_write_flush_to_disk_low();  // actually flush the log to disk:
	                                // the key durability step
}
```
  • This final part of the write path stores the data into the log group, updates the statistics and the write LSN, advances the flushed-to-disk LSN where the platform's flush method guarantees durability, and finally releases the write mutex and, if requested, flushes the data to disk.