On the Relationship Between v$tempfile and v$sort_usage

In the earlier post V$TEMPSEG_USAGE and Oracle Sorting, I noted that V$TEMPSEG_USAGE and V$SORT_USAGE come from the same source, and that the SEGFILE# column in them holds the absolute file number (AFN).

So for the temp files of a temporary tablespace, which column can this field be joined against?

Let us take another look at where V$TEMPFILE comes from. V$TEMPFILE is created by the following statement:

SELECT tf.inst_id, tf.tfnum, TO_NUMBER (tf.tfcrc_scn),
       TO_DATE (tf.tfcrc_tim, 'MM/DD/RR HH24:MI:SS', 'NLS_CALENDAR=Gregorian'),
       tf.tftsn, tf.tfrfn,
       DECODE (BITAND (tf.tfsta, 2), 0, 'OFFLINE', 2, 'ONLINE', 'UNKNOWN'),
       DECODE (BITAND (tf.tfsta, 12),
               0, 'DISABLED',
               4, 'READ ONLY',
               12, 'READ WRITE',
               'UNKNOWN'
              ),
       fh.fhtmpfsz * tf.tfbsz, fh.fhtmpfsz, tf.tfcsz * tf.tfbsz, tf.tfbsz,
       fn.fnnam
  FROM x$kcctf tf, x$kccfn fn, x$kcvfhtmp fh
 WHERE fn.fnfno = tf.tfnum
   AND fn.fnfno = fh.htmpxfil
   AND tf.tffnh = fn.fnnum
   AND tf.tfdup != 0
   AND fn.fntyp = 7
   AND fn.fnnam IS NOT NULL
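
As an aside, the definition quoted above (note the inst_id column) is really that of GV$TEMPFILE; V$TEMPFILE simply restricts it to the current instance. If you want to check it on your own database, both definitions can be pulled from v$fixed_view_definition, for example:

select view_definition
  from v$fixed_view_definition
 where view_name = 'GV$TEMPFILE';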

Examining the underlying x$kcctf table, we notice that TFAFN (the temp file absolute file number) is present here:

SQL> desc x$kcctf
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ADDR                                   RAW(4)
 INDX                                   NUMBER
 INST_ID                                NUMBER
 TFNUM                                  NUMBER
 TFAFN                                  NUMBER
 TFCSZ                                  NUMBER
 TFBSZ                                  NUMBER
 TFSTA                                  NUMBER
 TFCRC_SCN                              VARCHAR2(16)
 TFCRC_TIM                              VARCHAR2(20)
 TFFNH                                  NUMBER
 TFFNT                                  NUMBER
 TFDUP                                  NUMBER
 TFTSN                                  NUMBER
 TFTSI                                  NUMBER
 TFRFN                                  NUMBER
 TFPFT                                  NUMBER

This column, however, does not appear when v$tempfile is built, so we cannot join v$sort_usage and v$tempfile directly on the absolute file number.

Using the method from the earlier post LOB Objects and Temporary Segments, we can easily generate some sort-segment usage and then take a look (one way to do this is sketched below):
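
A minimal sketch (my own illustration, not necessarily the exact script from that article): create a session-duration temporary CLOB and leave it unfreed, so the temporary LOB segment stays allocated, and visible in v$sort_usage, for the rest of the session.

declare
  l_clob clob;
begin
  dbms_lob.createtemporary(l_clob, true, dbms_lob.session);
  dbms_lob.writeappend(l_clob, 5, 'DUMMY');
  -- deliberately no dbms_lob.freetemporary: the temporary segment stays allocated
end;
/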

SQL> select username,segtype,segfile#,segblk#,extents,segrfno#
  2  from v$sort_usage;
USERNAME SEGTYPE     SEGFILE#    SEGBLK#    EXTENTS   SEGRFNO#
-------- --------- ---------- ---------- ---------- ----------
SYS      LOB_DATA           9      18953          1          1

We can see that SEGFILE# = 9 here, yet this value is nowhere to be found in v$tempfile:

SQL> select file#,rfile#,ts#,status,blocks
  2  from v$tempfile;
     FILE#     RFILE#        TS# STATUS      BLOCKS
---------- ---------- ---------- ------- ----------
         1          1          2 ONLINE       38400
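
To make the mismatch explicit: a direct join between the two views on these columns comes back empty in this environment, since SEGFILE# (9) never matches FILE# (1). A quick sketch:

select su.segfile#, tm.file#
  from v$sort_usage su, v$tempfile tm
 where su.segfile# = tm.file#;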

We can, however, get this information from x$kcctf. As the listing below shows, v$tempfile.file# actually comes from x$kcctf.tfnum, which is merely the temp file's sequence number, whereas the absolute file number is x$kcctf.tfafn; it is the latter that can be joined to v$sort_usage.segfile#:

SQL> select indx,tfnum,tfafn,tfcsz       
  2  from x$kcctf;
      INDX      TFNUM      TFAFN      TFCSZ
---------- ---------- ---------- ----------
         0          1          9      38400
         1          2         10      12800

The absolute file numbers of the temporary tablespace's temp files can therefore be obtained with a query such as the following:

SQL> select tm.file# Fnum ,tf.tfafn AFN,tm.name FName
  2  from v$tempfile tm,x$kcctf tf
  3  where tm.file# = tf.tfnum;
      FNUM        AFN FNAME
---------- ---------- --------------------------------------------
         1          9 /opt/oracle/oradata/conner/temp1.dbf
         4         12 /opt/oracle/oradata/conner/temp2.dbf
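
Putting the pieces together, a session's temporary segment usage can be mapped all the way to the physical temp file by routing the join through x$kcctf (a sketch along the same lines; it must be run as a user who can see the x$ tables, typically SYS):

select su.username, su.segtype, su.segfile#, tm.name fname
  from v$sort_usage su, x$kcctf tf, v$tempfile tm
 where su.segfile# = tf.tfafn
   and tf.tfnum    = tm.file#;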
 

The remaining details need no further elaboration here.
