Audit Records

If you are creating an audit record, there are normally three fields that need to go at the start of it (remember to use the AUDIT_ prefix described in the record naming conventions). These three fields uniquely identify your audit data. They are, in the order they should be placed on the audit record (a sketch follows the list):

  1. AUDIT_OPRID
  2. AUDIT_STAMP
  3. AUDIT_ACTN

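For illustration, here is a minimal Python sketch of the shape of one audit row. The dataclass, the lower-case attribute names, and the example action codes are assumptions made for the sketch; only the three field names come from the conventions above.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class AuditRow:
        # The three identifying fields, in the order they go on the audit record
        audit_oprid: str       # AUDIT_OPRID: operator (user) who made the change
        audit_stamp: datetime  # AUDIT_STAMP: date/time the change was made
        audit_actn: str        # AUDIT_ACTN: action code, e.g. 'A' add, 'D' delete
        # ...followed by copies of the audited record's own fields
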
The AUDIT_STAMP field is a date/time field. For it to be populated correctly, set it to Auto-Update so that it is automatically stamped with the current date/time.

If you are using the same audit record for multiple records, also add the field AUDIT_RECNAME. Ensure that it is system maintained; it tells you which record each audit action relates to.
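
Here is a hedged Python sketch of both rules together: the writer supplies the stamp itself (the analogue of marking the field Auto-Update), and AUDIT_RECNAME records which source record the row audits. The write_audit_row helper and its store argument are hypothetical.

    from datetime import datetime, timezone

    def write_audit_row(store, oprid, actn, recname, record_fields):
        # 'store' is any append-only sink (hypothetical); 'record_fields'
        # holds copies of the audited record's own fields.
        row = {
            "AUDIT_OPRID": oprid,
            # Auto-Update analogue: the system, not the caller, sets the stamp
            "AUDIT_STAMP": datetime.now(timezone.utc),
            "AUDIT_ACTN": actn,
            # Only needed when several records share one audit record
            "AUDIT_RECNAME": recname,
        }
        row.update(record_fields)
        store.append(row)

For example, write_audit_row(audit_rows, "PS_ADMIN", "A", "JOB", {"EMPLID": "1001"}) appends one add-action row, where audit_rows can be as simple as a Python list.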

Further Information

The same idea of carrying audit fields on every row also appears in ETL pipelines. This PySpark method body splits an incremental load into new, updated, and unchanged records while maintaining a set of audit columns:

    from pyspark.sql.utils import AnalysisException

    if self.config.load_type == "INC":
        # Only incremental loads join the landing merge table;
        # ad hoc history jobs take the else branch below.
        try:
            landing_merge_df = self.spark.read.format(self.config.destination_file_type) \
                .load(self.config.destination_data_path)

            # Drop the incoming audit columns so they can be re-derived
            df = df.drop("audit_batch_id", "audit_job_id", "audit_src_sys_name",
                         "audit_created_usr", "audit_updated_usr",
                         "audit_created_tmstmp", "audit_updated_tmstmp")

            # Newly inserted records: in the incoming data but not in the landing merge table
            new_insert_df = df.join(landing_merge_df, primary_keys_list, "left_anti")
            self.logger.info(f"new_insert_df count: {new_insert_df.count()}")
            new_insert_df = DataSink_with_audit(self.spark).add_audit_columns(new_insert_df, param_dict)

            # Updated records: in both; keep the created_* audit values from the landing merge table
            update_df = df.alias("l").join(landing_merge_df.alias("lm"),
                                           on=primary_keys_list, how="inner")
            update_df = update_df.select("l.*", "lm.audit_batch_id", "lm.audit_job_id",
                                         "lm.audit_src_sys_name", "lm.audit_created_usr",
                                         "lm.audit_updated_usr", "lm.audit_created_tmstmp",
                                         "lm.audit_updated_tmstmp")
            self.logger.info(f"update_df count: {update_df.count()}")
            update_df = DataSink_with_audit(self.spark).update_audit_columns(update_df, param_dict)

            # Unchanged records: in the landing merge table but not in the incoming data
            unchanged_df = landing_merge_df.join(df, on=primary_keys_list, how="left_anti")
            self.logger.info(f"unchanged_records_df count: {unchanged_df.count()}")

            final_df = new_insert_df.union(update_df).union(unchanged_df)
            self.logger.info(f"final_df count: {final_df.count()}")
        except AnalysisException as e:
            if e.desc.startswith("Path does not exist"):
                self.logger.info("landing merge table does not exist; skipping the landing merge join")
                final_df = DataSink_with_audit(self.spark).add_audit_columns(df, param_dict)
            else:
                self.logger.error(f"unknown error: {e.desc}")
                raise e
    else:
        final_df = DataSink_with_audit(self.spark).add_audit_columns(df, param_dict)
    return final_df
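
The snippet depends on a DataSink_with_audit helper that is not shown. Below is a sketch of what its two methods plausibly do; the param_dict key names are guesses, and only the audit column names are taken from the snippet above.

    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    class DataSink_with_audit:
        # Hypothetical reconstruction of the helper used above.

        def __init__(self, spark: SparkSession):
            self.spark = spark

        def add_audit_columns(self, df: DataFrame, p: dict) -> DataFrame:
            # New rows: created_* and updated_* start out identical
            return (df.withColumn("audit_batch_id", F.lit(p["batch_id"]))
                      .withColumn("audit_job_id", F.lit(p["job_id"]))
                      .withColumn("audit_src_sys_name", F.lit(p["src_sys_name"]))
                      .withColumn("audit_created_usr", F.lit(p["user"]))
                      .withColumn("audit_updated_usr", F.lit(p["user"]))
                      .withColumn("audit_created_tmstmp", F.current_timestamp())
                      .withColumn("audit_updated_tmstmp", F.current_timestamp()))

        def update_audit_columns(self, df: DataFrame, p: dict) -> DataFrame:
            # Changed rows: keep created_* from the landing merge table
            # and refresh only the batch/job and updated_* columns
            return (df.withColumn("audit_batch_id", F.lit(p["batch_id"]))
                      .withColumn("audit_job_id", F.lit(p["job_id"]))
                      .withColumn("audit_updated_usr", F.lit(p["user"]))
                      .withColumn("audit_updated_tmstmp", F.current_timestamp()))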