OPTIMIZER_MODE Initialization Parameter

This article introduces the different optimization goals available for SQL execution plans, including ALL_ROWS, FIRST_ROWS_n, and FIRST_ROWS, and explains how each goal affects SQL statement execution efficiency and response time. In particular, it contrasts cost-based and heuristic optimization approaches and describes how the optimizer behaves when statistics are missing.

ALL_ROWS: The optimizer uses a cost-based approach for all SQL statements in the session, regardless of the presence of statistics, and optimizes with a goal of best throughput (minimum resource use to complete the entire statement). This is the default value.

FIRST_ROWS_n: The optimizer uses a cost-based approach, regardless of the presence of statistics, and optimizes with a goal of best response time to return the first n rows; n can equal 1, 10, 100, or 1000.

FIRST_ROWS: The optimizer uses a mix of cost and heuristics to find the best plan for fast delivery of the first few rows.

Note: Using heuristics sometimes leads the query optimizer to generate a plan whose cost is significantly larger than the cost of a plan produced without the heuristic. FIRST_ROWS is available for backward compatibility and plan stability; use FIRST_ROWS_n instead.
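
OPTIMIZER_MODE can be set for the whole instance, changed for a session, or overridden for a single statement with a hint. A brief sketch of all three levels (the EMPLOYEES table here is illustrative, not from the original text):

    -- Instance level (persisted to the SPFILE and applied immediately)
    ALTER SYSTEM SET optimizer_mode = ALL_ROWS SCOPE = BOTH;

    -- Session level: favor fast return of the first 10 rows
    ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

    -- Statement level: the FIRST_ROWS(n) hint overrides the session setting
    SELECT /*+ FIRST_ROWS(10) */ employee_id, last_name
    FROM   employees
    ORDER  BY hire_date;

Note that the hint form takes n as an argument, while the parameter form encodes n in the name (FIRST_ROWS_1, FIRST_ROWS_10, FIRST_ROWS_100, FIRST_ROWS_1000).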

If the optimizer uses the cost-based approach for a SQL statement, and if some
tables accessed by the statement have no statistics, then the optimizer uses
internal information, such as the number of data blocks allocated to these
tables, to estimate other statistics for these tables.
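
These internally estimated statistics can be far less accurate than gathered ones, so in practice you would collect statistics explicitly with the standard DBMS_STATS package. A minimal sketch, assuming an HR.EMPLOYEES table (schema and table names are illustrative):

    -- Gather statistics for one table, letting Oracle choose the sample size
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'HR',
        tabname          => 'EMPLOYEES',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);   -- also gather statistics on the table's indexes
    END;
    /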


Source: ITPUB blog, http://blog.itpub.net/10599713/viewspace-1001789/

/* * This file is part of the openHiTLS project. * * openHiTLS is licensed under the Mulan PSL v2. * You can use this software according to the terms and conditions of the Mulan PSL v2. * You may obtain a copy of Mulan PSL v2 at: * * http://license.coscl.org.cn/MulanPSL2 * * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. * See the Mulan PSL v2 for more details. */ /** * @defgroup crypt_errno * @ingroup crypt * @brief error number module of crypto module */ #ifndef CRYPT_ERRNO_H #define CRYPT_ERRNO_H #ifdef __cplusplus extern "C" { #endif /** * @ingroup crypt_errno * @brief Return success */ #define CRYPT_SUCCESS 0 /** * @ingroup crypt_errno * * CRYPTO module return value. */ enum CRYPT_ERROR { CRYPT_NULL_INPUT = 0x01010001, /**< Null pointer input error, bufferLen is 0. */ CRYPT_SECUREC_FAIL, /**< Security function returns an error. */ CRYPT_MEM_ALLOC_FAIL, /**< Failed to apply for memory. */ CRYPT_NO_REGIST_RAND, /**< The global random number is not registered.*/ CRYPT_ERR_ALGID, /**< Incorrect algorithm ID. */ CRYPT_INVALID_ARG, /**< Invalid input parameter. */ CRYPT_NOT_SUPPORT, /**< unsupported operation. */ CRYPT_INCONSISTENT_OPERATION, /**< Inconsistent operation. */ CRYPT_INVALID_KEY, /**< invalid key. */ CRYPT_PAIRWISE_CHECK_FAIL, /**< key-pair check failed. */ CRYPT_BN_BUFF_LEN_NOT_ENOUGH = 0x01020001, /**< Insufficient buffer length. */ CRYPT_BN_SPACE_NOT_ENOUGH, /**< Insufficient big number space. */ CRYPT_BN_BITS_TOO_MAX, /**< The maximum bit limit is exceeded of the big number. */ CRYPT_BN_RAND_GEN_FAIL, /**< Failed to generate the random number. */ CRYPT_BN_OPTIMIZER_STACK_FULL, /**< Optimizer stack is full. */ CRYPT_BN_NO_NEGATIVE_ZERO, /**< The big number is set to a positive number only. */ CRYPT_BN_ERR_RAND_ZERO, /**< Generates a random number smaller than 0. */ CRYPT_BN_ERR_RAND_NEGATIVE, /**< Generate a negative random number. */ CRYPT_BN_ERR_RAND_TOP_BOTTOM, /**< The top or bottom is invalid during random number generation. */ CRYPT_BN_ERR_RAND_BITS_NOT_ENOUGH, /**< The bit is too small during random number generation. */ CRYPT_BN_OPTIMIZER_GET_FAIL, /**< Failed to obtain the space from the optimizer. */ CRYPT_BN_ERR_DIVISOR_ZERO, /**< The divisor cannot be 0. */ CRYPT_BN_ERR_EXP_NO_NEGATIVE, /**< The value of exponent cannot be negative. */ CRYPT_BN_MONT_BASE_TOO_MAX, /**< Montgomery module exponentiation base is too large. */ CRYPT_BN_NOR_GEN_PRIME, /**< Prime Number Generation Failure. */ CRYPT_BN_NOR_CHECK_PRIME, /**< prime number check failed. */ CRYPT_BN_ERR_GCD_NO_ZERO, /**< The maximum common divisor cannot contain 0. */ CRYPT_BN_ERR_NO_INVERSE, /**< Cannot obtain the inverse module. */ CRYPT_BN_ERR_SQRT_PARA, /**< The parameter is incorrect when modulus square root. */ CRYPT_BN_ERR_LEGENDE_DATA, /**< Failed to find a specific number for z to p's Legendre sign (z|p) equal to -1 when calculating the square root. */ CRYPT_BN_ERR_NO_SQUARE_ROOT, /**< The square root cannot be found. */ CRYPT_BN_ERR_MASKCOPY_LEN, /**< Data lengths are inconsistent when data is copied with masks. */ CRYPT_BN_ERR_QUICK_MODDATA, /**< Uses the BN_ModNistEccMul and BN_ModNistEccSqr interfaces, the module data is not supported. */ CRYPT_BN_FLAG_INVALID, /**< Invalid big number flag. */ CRYPT_BN_CONVERT_INPUT_INVALID, /**< Invalid input parameter of big number strings. 
*/ CRYPT_BN_NOT_SUPPORT_EXTENSION, /**< The big number does not support dynamic extension. */ CRYPT_BN_INPUT_INVALID, /**< Invalid external big number input. */ CRYPT_BN_BITS_INVALID, /**< The bits of the big number exceeds the limit. */ CRYPT_BN_ERR_SWAP_LEN, /**< Data lengths are inconsistent when data is swapped with masks. */ CRYPT_RSA_BUFF_LEN_NOT_ENOUGH = 0x01030001, /**< The buffer length is insufficient. */ CRYPT_RSA_NO_KEY_INFO, /**< Lacks valid key information. */ CRYPT_RSA_ERR_KEY_BITS, /**< Incorrect key length. */ CRYPT_RSA_ERR_E_VALUE, /**< The value of parameter e is incorrect. */ CRYPT_RSA_NOR_KEYGEN_FAIL, /**< Key generation failure, it's normal error. */ CRYPT_RSA_NOR_VERIFY_FAIL, /**< Failed to verify the signature. it's normal error. */ CRYPT_RSA_ERR_ENC_BITS, /**< Incorrect length of the encrypted plaintext of the public key. */ CRYPT_RSA_ERR_DEC_BITS, /**< Incorrect length of the decrypted ciphertext of the private key. */ CRYPT_RSA_ERR_PSS_SALT_LEN, /**< Incorrect salt length of the PSS operation. */ CRYPT_RSA_ERR_PSS_SALT_DATA, /**< PSS operation salt data error, failed to compare the salt extracted during signature verification with the user's input. */ CRYPT_RSA_ERR_PKCSV15_SALT_LEN, /**< Incorrect salt length of the PKCSV15 operation. */ CRYPT_RSA_ERR_PKCSV15_SALT_DATA, /**< PKCSV15 salt data error. */ CRYPT_RSA_ERR_INPUT_VALUE, /**< Some special values, which are used as input errors. */ CRYPT_RSA_ERR_MD_ALGID, /**< The hash ID of the input parameter is incorrect when the pkcs1.5 padding mode is set. */ CRYPT_RSA_PAD_NO_SET_ERROR, /**< Padding information is not set when using RSA key for signature verification. */ CRYPT_RSA_CTRL_NOT_SUPPORT_ERROR, /**< The Ctrl type is not supported When RSA is used for Ctrl. */ CRYPT_RSA_SET_SALT_NOT_PSS_ERROR, /**< When the padding type of the key is not pss, and set the salt information, return failure. */ CRYPT_RSA_SET_EMS_PKCSV15_LEN_ERROR,/**< Sets the PKCSV15 padding information, the length of the input data is incorrect and return failure. */ CRYPT_RSA_SET_EMS_PSS_LEN_ERROR, /**< Sets the PSS padding information, the length of the input data is incorrect, and return failure. */ CRYPT_RSA_SET_RSAES_OAEP_LEN_ERROR, /**< Sets the OAEP padding information, the length of the input data is incorrect and return failure. */ CRYPT_RSA_SET_FLAG_LEN_ERROR, /**< The length of the input data is incorrect and return failure When sets the flag. */ CRYPT_RSA_FLAG_NOT_SUPPORT_ERROR, /**< Unsupported flag. */ CRYPT_RSA_ERR_SALT_LEN, /**< Salt length error. */ CRYPT_RSA_ERR_ALGID, /**< The hash ID of the input parameter is incorrect or conflict occurs when sets the signature, signature verification, and padding parameters. */ CRYPT_RSA_ERR_GEN_SALT, /**< An error is returned when salt information fails to be generated during PSS signature. */ CRYPT_RSA_ERR_ENC_INPUT_NOT_ENOUGH, /**< The plaintext length is too short for RSA NO PAD encryption. */ CRYPT_RSA_ERR_DATA_LEN, /**< Incorrect encryption length. */ CRYPT_RSA_ERR_PAD_NUM, /**< Incorrect padding length. */ CRYPT_RSA_PUBKEY_NOT_EQUAL, /**< RSA public keys are not equal. */ CRYPT_RSA_KEYPAIRWISE_CONSISTENCY_FAILURE, /**< RSA pair-wise consistency failure. */ CRYPT_RSA_ERR_BLIND_TYPE, /**< Invalid RSA blinding type. Only RSA-BSSA is currently supported. */ CRYPT_RSA_ERR_NO_BLIND_INFO, /**< RSA blinding information is missing. The blind/unblind operation requires previous blinding parameters. */ CRYPT_RSA_ERR_NO_PUBKEY_INFO, /**< The rsa pub key is missing. 
*/ CRYPT_RSA_PADDING_NOT_SUPPORTED, /**< The specified RSA padding mode is not supported in blinding. */ CRYPT_RSA_ERR_BSSA_PARAM, /**< The param of bssa is not invalid. */ CRYPT_RSA_GET_SALT_LEN_ERROR, /**< The input length of getting salt-len is incorrect. */ CRYPT_RSA_GET_SALT_NOT_PSS_ERROR, /**< When the padding type of the key is not pss, and get the salt len. */ CRYPT_RSA_ERR_PSS_PARAMS, /**< The parameter is error when the padding type of the key is pss. */ CRYPT_RSA_ERR_NO_PRVKEY_INFO, /**< The rsa prv key is missing. */ CRYPT_RSA_ERR_INVALID_PRVKEY, /**< The private key is invalid. */ CRYPT_EAL_BUFF_LEN_NOT_ENOUGH = 0x01040001, /**< Insufficient buffer length. */ CRYPT_EAL_BUFF_LEN_TOO_LONG, /**< Insufficient buffer length. */ CRYPT_EAL_ERR_ALGID, /**< Incorrect algorithm ID. */ CRYPT_EAL_ALG_NOT_SUPPORT, /**< Algorithm not supported, algorithm behavior not supported. */ CRYPT_EAL_ERR_NEW_PARA_FAIL, /**< Failed to generate parameters. */ CRYPT_EAL_ERR_RAND_WORKING, /**< DRBG is in the working state. */ CRYPT_EAL_ERR_RAND_NO_WORKING, /**< DRBG is not working. */ CRYPT_EAL_ERR_METH_NULL_MEMBER, /**< The method variable member is NULL. */ CRYPT_EAL_ERR_GLOBAL_DRBG_NULL, /**< The global DRBG is null. */ CRYPT_EAL_ERR_DRBG_REPEAT_INIT, /**< DRBG is initialized repeatedly. */ CRYPT_EAL_ERR_DRBG_INIT_FAIL, /**< DRBG initialization failure. */ CRYPT_EAL_ERR_STATE, /**< The usage process is incorrect. For example, run the update command without running the init command. For details, see related algorithms. */ CRYPT_EAL_CIPHER_DATA_ERROR, /**< Data error occurs when unpadding the decrypted data. For X923, the last bit is the length of the original data, and the rest data is 0, if this requirement is not met, an error is reported. For pkcs, all padding data is (the length of the padding data - the length of the original data), if this requirement is not met,an error will be reported. For ISO7816, the first bit of padding data is 0x80, and the other bits are 0, if this requirement is not met, an error will be reported. */ CRYPT_EAL_PADDING_NOT_SUPPORT, /**< Unsupported padding. */ CRYPT_EAL_CIPHER_CTRL_ERROR, /**< CRYPT_EAL_CipherCtrl interface unsupported CTRL type. */ CRYPT_EAL_CIPHER_FINAL_WITH_AEAD_ERROR, /**< An error occurs when the final operation is performed on the AEAD algorithm. */ CRYPT_EAL_PKEY_CTRL_ERROR, /**< When the CRYPT_EAL_PkeyCtrl interface performs CTRL, the function is not supported or the input length is incorrect. */ CRYPT_EAL_MAC_CTRL_TYPE_ERROR, /**< When the CRYPT_EAL_PkeyCtrl interface performs CTRL, the function is not supported or the input length is incorrect. */ CRYPT_EAL_PKEY_DUP_ERROR, /**< Pkey context duplicate failure. */ CRYPT_EAL_PKEY_CMP_DIFF_KEY_TYPE, /**< Pkey comparison failure: different algorithm types. */ CRYPT_EAL_ERR_PART_OVERLAP, /**< Some memory overlap. */ CRYPT_EAL_INTO_TYPE_NOT_SUPPORT, /**< The info type is not supported. */ CRYPT_EAL_ALG_ASM_NOT_SUPPORT, /**< Algorithm assembly is not supported. */ CRYPT_EAL_CIPHER_ERR_NEWCTX, CRYPT_EAL_PKEY_CHECK_ERROR, /**< Pkey check failure. */ CRYPT_EAL_MD_METH_NULL, CRYPT_SHA2_INPUT_OVERFLOW = 0x01050001, /**< The length of the input data exceeds the maximum processing range of SHA2. */ CRYPT_SHA2_OUT_BUFF_LEN_NOT_ENOUGH, /**< The length of the buffer that storing the output result is insufficient. */ CRYPT_DRBG_ERR_STATE = 0x01060001, /**< DRBG status error. */ CRYPT_DRBG_FAIL_GET_ENTROPY, /**< Failed to obtain the entropy. */ CRYPT_DRBG_FAIL_GET_NONCE, /**< Failed to obtain the nonce. 
*/ CRYPT_DRBG_ALG_NOT_SUPPORT, /**< Does not support the given algorithm. */ CRYPT_DRBG_INVALID_LEN, /**< Incorrect data length. */ CRYPT_DRBG_PARAM_ERROR, /**< Incorrect input parameter. */ CRYPT_CURVE25519_NO_PUBKEY = 0x01080001, /**< No public key. */ CRYPT_CURVE25519_NO_PRVKEY, /**< No private key. */ CRYPT_CURVE25519_KEYLEN_ERROR, /**< Incorrect key length. */ CRYPT_CURVE25519_SIGNLEN_ERROR, /**< Incorrect signature length. */ CRYPT_CURVE25519_HASH_METH_ERROR, /**< Hash method is not SHA512. */ CRYPT_CURVE25519_VERIFY_FAIL, /**< Signature verification fails due to incorrect signature. */ CRYPT_CURVE25519_NO_HASH_METHOD, /**< Hash method not set. */ CRYPT_CURVE25519_UNSUPPORTED_CTRL_OPTION, /**< Unsupported mode of operation. */ CRYPT_CURVE25519_KEY_COMPUTE_FAILED, /**< Failed to generate the shared key. */ CRYPT_CURVE25519_INVALID_PUBKEY, /**< Invalid public key. */ CRYPT_CURVE25519_PUBKEY_NOT_EQUAL, /**< Public keys are not equal. */ CRYPT_CURVE25519_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_CURVE25519_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_SHA1_INPUT_OVERFLOW = 0x01090001, /**< The length of the input data exceeds the maximum processing range of SHA1. */ CRYPT_SHA1_OUT_BUFF_LEN_NOT_ENOUGH, /**< The length of the buffer that storing the output result is insufficient. */ CRYPT_ENTROPY_RCT_FAILURE = 0x010A0001, /**< RCT detection fails, restart the entropy source. */ CRYPT_ENTROPY_APT_FAILURE, /**< APT detection fails, restart the entropy source. */ CRYPT_ENTROPY_CONDITION_FAILURE, /**< Processing method error after invoking. */ CRYPT_ENTROPY_RANGE_ERROR, /**< Entropy source generation range error */ CRYPT_ENTROPY_ECF_ALG_ERROR, /**< Entropy source conditioning algorithm is incorrect. */ CRYPT_ENTROPY_ECF_IS_ERROR, /**< Entropy source conditioning is incorrect. */ CRYPT_ENTROPY_ES_CREATE_ERROR, /**< Entropy pool creation error. */ CRYPT_ENTROPY_ES_STATE_ERROR, /**< Incorrect entropy pool status. */ CRYPT_ENTROPY_ES_CTRL_ERROR, /**< Incorrect entropy pool settings. */ CRYPT_ENTROPY_ES_NO_NS, /**< No available noise source in the entropy pool. */ CRYPT_ENTROPY_ES_NS_NOT_FOUND, /**< Noise source not found. */ CRYPT_ENTROPY_ES_DUP_NS, /**< Noise source Repetition. */ CRYPT_ENTROPY_ES_NS_NOT_AVA, /**< Noise source not available. */ CRYPT_ENTROPY_ES_NS_FULL, /**< Noise source list is full. */ CRYPT_ENTROPY_ES_CF_NOT_SUPPORT, /**< Nonditioning function not supported. */ CRYPT_ENTROPY_ES_CF_ERROR, /**< Nonditioning function error. */ CRYPT_ENTROPY_ES_ENTROPY_NOT_ENOUGH, /**< Not getting enough entropy. */ CRYPT_ENTROPY_ES_POOL_ERROR, /**< Entropy pool error. */ CRYPT_ENTROPY_ES_POOL_INSUFFICIENT, /**< Entropy pool capacity is insufficient. */ CRYPT_ENTROPY_CTRL_INVALID_PARAM, /**< Entropy invalid parameter. */ CRYPT_DSA_BUFF_LEN_NOT_ENOUGH = 0x010B0001, /**< Insufficient buffer length. */ CRYPT_DSA_ERR_KEY_PARA, /**< Incorrect key parameter data. */ CRYPT_DSA_ERR_KEY_INFO, /**< Incorrect key information. */ CRYPT_DSA_VERIFY_FAIL, /**< Verification failure. */ CRYPT_DSA_ERR_TRY_CNT, /**< Key generation and signature fail to be generated within the specified number of attempts. */ CRYPT_DSA_DECODE_FAIL, /**< Data decoding fails, the data does not meet the decoding requirements. */ CRYPT_DSA_UNSUPPORTED_CTRL_OPTION, /**< Unsupported mode of operation. */ CRYPT_DSA_PARA_ERROR, /**< The value of the key parameter does not meet the requirements. The ctx command does not contain necessary parameter information. 
*/ CRYPT_DSA_PUBKEY_NOT_EQUAL, /**< Public keys are not equal. */ CRYPT_DSA_PARA_NOT_EQUAL, /**< Key parameters are not equal. */ CRYPT_DSA_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_DSA_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_HMAC_OUT_BUFF_LEN_NOT_ENOUGH = 0x010C0001, /**< The length of the buffer that storing the output result is insufficient. */ CRYPT_HMAC_ERR_UNSUPPORTED_CTRL_OPTION, /**< Unsupport the control type. */ CRYPT_HMAC_ERR_NO_MD_LIB_CTX, /**< MD library context not set. */ CRYPT_HMAC_PARAM_ERROR, /**< Incorrect input parameter. */ CRYPT_DH_BUFF_LEN_NOT_ENOUGH = 0x010D0001, /**< The buffer length is insufficient. */ CRYPT_DH_PARA_ERROR, /**< The value of the key parameter does not meet the requirements, the ctx command does not contain necessary parameter information. */ CRYPT_DH_KEYINFO_ERROR, /**< The value of the public and private keys do not meet the requirements, the ctx does not contain the necessary public and private keys. */ CRYPT_DH_RAND_GENERATE_ERROR, /**< Key generation fails within the specified number of attempts. */ CRYPT_DH_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_DH_UNSUPPORTED_CTRL_OPTION, /**< Unsupported mode of operation. */ CRYPT_DH_CREATE_PARA_FAIL, /**< Failed to create the p, q, and g parameters of the DH algorithm. */ CRYPT_DH_PUBKEY_NOT_EQUAL, /**< Public keys are not equal. */ CRYPT_DH_PARA_NOT_EQUAL, /**< DH key parameters are not equal. */ CRYPT_DH_SET_FLAG_LEN_ERROR, /**< The length of the input data is incorrect and return failure when setting the flag. */ CRYPT_DH_FLAG_NOT_SUPPORT_ERROR, /**< Unsupported flag. */ CRYPT_DH_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_CHACHA20_KEYLEN_ERROR = 0x010E0001, /**< The key length input is incorrect during key setting. */ CRYPT_CHACHA20_NONCELEN_ERROR, /**< The length of the input nounce is incorrect when you set the nounce. */ CRYPT_CHACHA20_COUNTLEN_ERROR, /**< The length of the input count is incorrect when you set the count. */ CRYPT_CHACHA20_NO_KEYINFO, /**< Lack of valid key information during encryption and decryption. */ CRYPT_CHACHA20_NO_NONCEINFO, /**< Lack of valid nounce information during encryption and decryption. */ CRYPT_CHACHA20_CTRLTYPE_ERROR, /**< The input type is not supported when the ctrl interface is used. */ CRYPT_AES_ERR_KEYLEN = 0x010F0001, /**< Incorrect key length. */ CRYPT_MODES_TAGLEN_ERROR = 0x01100001, /**< In AEAD mode, the length of the TAG is incorrect when the tag is obtained and verified. */ CRYPT_MODES_IVLEN_ERROR, /**< The length of the input IV is incorrect when setting the IV. */ CRYPT_MODES_KEYUSE_TOOMANY_TIME, /**< In GCM mode, the number of times that a key can be used for encryption and decryption is limited. When the number of times that a key is used exceeds the limit, an error is reported. */ CRYPT_MODES_CRYPTLEN_OVERFLOW, /**< In AEAD mode, the length of the plaintext or ciphertext input for a single encryption exceeds the limit. */ CRYPT_MODES_CTRL_TAGLEN_ERROR, /**< In GCM or CCM mode, the length of the input parameter or the length of the input parameter data is incorrect when the ctrl interface is used to set the tag length. */ CRYPT_MODES_AAD_REPEAT_SET_ERROR, /**< In the AEAD mode, the AAD information is set repeatedly. */ CRYPT_MODE_BUFF_LEN_NOT_ENOUGH, /**< The buffer length is insufficient. */ CRYPT_MODE_ERR_INPUT_LEN, /**< The function input length is not the expected length. 
*/ CRYPT_MODES_CTRL_TYPE_ERROR, /**< The input type is not supported when the ctrl interface is used. */ CRYPT_MODES_AAD_IS_SET_ERROR, /**< In ccm mode, an error is returned when the tagLen and msgLen are set after the aad is set. */ CRYPT_MODES_MSGLEN_OVERFLOW, /**< In ccm mode, the length of the input message during encryption and decryption exceeds the set msgLen. */ CRYPT_MODES_CTRL_MSGLEN_ERROR, /**< In ccm mode, When the ctrl interface is used to set the msg length, the input parameter length or the input parameter data length is incorrect. (This specification is affected by ivLen.) */ CRYPT_MODES_MSGLEN_LEFT_ERROR, /**< In ccm mode, when the ctrl interface is used to obtain the tag, the length of the encrypted and decrypted messages does not reach the configured number. As a result, an error occurs. */ CRYPT_MODES_ERR_KEYLEN, /**< Incorrect key length set. */ CRYPT_MODES_ERR_KEY, /**< Incorrect key set. */ CRYPT_MODES_ERR_FEEDBACKSIZE, /**< The operation are not support by the algorithm on which the pattern depends on. */ CRYPT_MODES_METHODS_NOT_SUPPORT, /**< Mode depends does not support the behavior. */ CRYPT_MODES_FEEDBACKSIZE_NOT_SUPPORT, /**< The algorithm does not support the setting of feedbacksize. */ CRYPT_MODES_PADDING_NOT_SUPPORT, /**< Unsupported padding. */ CRYPT_HKDF_DKLEN_OVERFLOW = 0x01110001, /**< The length of the derived key exceeds the maximum. */ CRYPT_HKDF_NOT_SUPPORTED, /**< Unsupport HKDF algorithm. */ CRYPT_HKDF_PARAM_ERROR, /**< Incorrect input parameter. */ CRYPT_HKDF_ERR_MAC_ID_NOT_SET, /**< Mac id not set. */ CRYPT_HKDF_ERR_MAC_METH, /**< Mac method err. */ CRYPT_CMAC_OUT_BUFF_LEN_NOT_ENOUGH = 0x01120001, /**< The length of the buffer that storing the output result is insufficient. */ CRYPT_CMAC_INPUT_OVERFLOW, /**< The input length exceeds the limit. As a result, the integer type is reversed. */ CRYPT_CMAC_ERR_UNSUPPORTED_CTRL_OPTION, /**< Unsupport the control type. */ CRYPT_GMAC_ERR_UNSUPPORTED_CTRL_OPTION, /**< Unsupport the control type. */ CRYPT_SCRYPT_PARAM_ERROR = 0x01130001, /**< Incorrect input parameter. */ CRYPT_SCRYPT_NOT_SUPPORTED, /**< Unsupport the SCRYPT algorithm. */ CRYPT_SCRYPT_DATA_TOO_MAX, /**< The data calculated by the SCRYPT algorithm is too large. */ CRYPT_PBKDF2_PARAM_ERROR = 0x01150001, /**< Incorrect input parameter. */ CRYPT_PBKDF2_NOT_SUPPORTED, /**< Does not support the PBKDF2 algorithm. */ CRYPT_PBKDF2_ERR_MAC_METH, /**< Mac method err. */ CRYPT_PBKDF2_ERR_MAC_ID_NOT_SET, /**< Mac id not set. */ CRYPT_ECC_POINT_AT_INFINITY = 0x01160001, /**< Point at infinity. */ CRYPT_ECC_POINT_NOT_ON_CURVE, /**< Point is not on the curve. */ CRYPT_ECC_POINT_ERR_CURVE_ID, /**< Curve ID is inconsistent or incorrect. */ CRYPT_ECC_POINT_WINDOW_TOO_MAX, /**< Window is too max. */ CRYPT_ECC_POINT_NOT_EQUAL, /**< The two points are not equal. */ CRYPT_ECC_POINT_BLIND_WITH_ZERO, /**< The random number generated during point salting is 0. */ CRYPT_ECC_POINT_NOT_AFFINE, /**< Point is not affine coordinates. */ CRYPT_ECC_NOT_SUPPORT, /**< This function is not supported. */ CRYPT_ECC_POINT_MUL_ERR_K_LEN, /** The scalar length exceeds the curve specification when using the dot multiplication function */ CRYPT_ECC_BUFF_LEN_NOT_ENOUGH, /**< Insufficient buffer length. */ CRYPT_ECC_ERR_POINT_FORMAT, /**< The encoding format input during point encoding is incorrect. */ CRYPT_ECC_ERR_POINT_CODE, /**< Incorrect point code information. */ CRYPT_ECC_PKEY_ERR_UNSUPPORTED_CTRL_OPTION, /**< Unsupport the control type. 
*/ CRYPT_ECC_PKEY_ERR_EMPTY_KEY, /**< Key is null. */ CRYPT_ECC_PKEY_ERR_INVALID_POINT_FORMAT, /**< Invalid dot format. */ CRYPT_ECC_PKEY_ERR_CTRL_LEN, /**< Control input parameter is incorrect. */ CRYPT_ECC_PKEY_ERR_INVALID_PRIVATE_KEY, /**< Invalid private key. */ CRYPT_ECC_PKEY_ERR_INVALID_PUBLIC_KEY, /**< Invalid public key. */ CRYPT_ECC_PKEY_ERR_TRY_CNT, /**< Key generation or generater signature fail within the specified number of attempts. */ CRYPT_ECC_PKEY_ERR_SIGN_LEN, /**< Invalid sign length */ CRYPT_ECC_ERR_PARA, /**< Incorrect curve parameter. */ CRYPT_ECC_INVERSE_INPUT_ZERO, /** Modulo inverse input is 0. */ CRYPT_ECC_KEY_PUBKEY_NOT_EQUAL, /**< ECC public keys are not equal. */ CRYPT_ECC_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_ECC_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_SHA3_OUT_BUFF_LEN_NOT_ENOUGH = 0x01170001, /**< Insufficient buffer length for storing output results. */ CRYPT_SHA3_INVALID_STATE, /**< Invalid state. */ CRYPT_ECDH_ERR_UNSUPPORT_CURVE_TYPE = 0x01180001, /**< Unsupported curve type. */ CRYPT_ECDH_ERR_EMPTY_KEY, /**< Key is null. */ CRYPT_ECDH_ERR_INVALID_COFACTOR, /**< Invalid cofactor value. */ CRYPT_ECDH_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_ECDH_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_ECDSA_ERR_EMPTY_KEY = 0x01190001, /**< Key is NULL. */ CRYPT_ECDSA_ERR_TRY_CNT, /**< Key generation and generate signature fail within the specified number of attempts. */ CRYPT_ECDSA_VERIFY_FAIL, /**< Verification failure. */ CRYPT_ECDSA_ERR_UNSUPPORTED_CTRL_OPTION, /**< Unsupport the control type. */ CRYPT_ECDSA_BUFF_LEN_NOT_ENOUGH, /**< BUFF insufficient length. */ CRYPT_ECDSA_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_ECDSA_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_SM3_INPUT_OVERFLOW = 0x011A0001, /**< The length of the input data exceeds the maximum processing range of the SM3. */ CRYPT_SM3_OUT_BUFF_LEN_NOT_ENOUGH, /**< The length of the buffer that storing the output result is insufficient. */ CRYPT_SM4_ERR_IV_LEN = 0x011B0001, /**< Wrong key length set. */ CRYPT_SM4_ERR_MSG_LEN, /**< Wrong data length is set. */ CRYPT_SM4_ERR_KEY_LEN, /**< Wrong key length is set. */ CRYPT_SM4_UNSAFE_KEY, /**< DataKey is the same as tweakKey. */ CRYPT_MD5_INPUT_OVERFLOW = 0x011D0001, /**< The length of the input data exceeds the maximum processing range of the MD5. */ CRYPT_MD5_OUT_BUFF_LEN_NOT_ENOUGH, /**< The length of the buffer that storing the output result is insufficient. */ CRYPT_MD_ERR_NEWCTX, /**< create md ctx failed. */ CRYPT_SM2_BUFF_LEN_NOT_ENOUGH = 0x01200001, /**< Insufficient buffer length. */ CRYPT_SM2_NO_PUBKEY, /**< SM2 the public key is not set. */ CRYPT_SM2_NO_PRVKEY, /**< SM2 The private key is not set. */ CRYPT_SM2_ERR_EMPTY_KEY, /**< SM2 key is null. */ CRYPT_SM2_ERR_TRY_CNT, /**< Key generation and generate signature fail within the specified number of attempts. */ CRYPT_SM2_VERIFY_FAIL, /**< verification failure. */ CRYPT_SM2_ERR_UNSUPPORTED_CTRL_OPTION, /**< Unsupported control type. */ CRYPT_SM2_ERR_NO_HASH_METHOD, /**< No hash method information. */ CRYPT_SM2_USERID_NOT_SET, /**< Unset userID. */ CRYPT_SM2_R_NOT_SET, /**< The peer R value is not set. */ CRYPT_SM2_INVALID_SERVER_TYPE, /**< The user is neither the initiator nor the recipient. */ CRYPT_SM2_ERR_CTRL_LEN, /**< Incorrect ctrl length. */ CRYPT_SM2_DECRYPT_FAIL, /**< Decryption failure. */ CRYPT_SM2_ERR_DATA_LEN, /**< Incorrect data length. 
*/ CRYPT_SM2_ERR_GET_S, /**< Failed to obtain the checksum. */ CRYPT_SM2_ERR_S_NOT_SET, /**< Unset checksum. */ CRYPT_SM2_EXCH_VERIFY_FAIL, /**< Key Negotiation Failure. */ CRYPT_SM2_DECODE_FAIL, /**< Data decoding fails, the data does not meet the decoding requirements. */ CRYPT_SM2_ID_TOO_LARGE, /**< User id to large. */ CRYPT_SM2_K_REPEAT_SET_ERROR, /**< the random k is set repeatedly*/ CRYPT_SM2_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_SM2_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_KDFTLS12_NOT_SUPPORTED = 0x01210001, /**< Unsupport the KDFTLS12 algorithm. */ CRYPT_KDFTLS12_PARAM_ERROR, /**< Incorrect input parameter. */ CRYPT_KDFTLS12_ERR_MAC_METH, /**< Mac method err. */ CRYPT_KDFTLS12_ERR_MAC_ID_NOT_SET, /**< Mac id not set. */ CRYPT_SIPHASH_OUT_BUFF_LEN_NOT_ENOUGH = 0x01220001, /**< The buffer size for storing the output result is insufficient. */ CRYPT_SIPHASH_INPUT_OVERFLOW, CRYPT_SIPHASH_ERR_UNSUPPORTED_CTRL_OPTION, /**< Unsupport the control type. */ CRYPT_CBC_MAC_ERR_CTRL_LEN = 0x01240001, CRYPT_CBC_MAC_ERR_UNSUPPORTED_CTRL_OPTION, CRYPT_CBC_MAC_PADDING_NOT_SET, CRYPT_CBC_MAC_PADDING_NOT_SUPPORT, CRYPT_CBC_MAC_OUT_BUFF_LEN_NOT_ENOUGH, CRYPT_SEED_POOL_NEW_ERROR = 0x01290001, /**< The length of the key input is incorrect when setting the key. */ CRYPT_SEED_POOL_STATE_ERROR, /**< Incorrect seed pool status. */ CRYPT_SEED_POOL_ES_LIST_FULL, /**< The number of entropy sources exceeds the upper limit. */ CRYPT_SEED_POOL_NO_SUFFICIENT_ENTROPY, /**< The seed pool cannot provide sufficient entropy. */ CRYPT_SEED_POOL_NO_ENTROPY_SOURCE, /**< The seed pool has no entropy source. */ CRYPT_SEED_POOL_NO_ENTROPY_OBTAINED, /**< No entropy data is obtained from the seed pool. */ CRYPT_SEED_POOL_NOT_MEET_REQUIREMENT, /**< The entropy data does not meet the requirements. */ CRYPT_ENTROPY_CTX_CREATE_FAILED, /**< Failed to create the handle for obtaining the entropy. */ CRYPT_MLKEM_KEYLEN_ERROR = 0x01300001, /**< Incorrect input data length. */ CRYPT_MLKEM_LEN_NOT_ENOUGH, /**<The buffer size of output is insufficient. */ CRYPT_MLKEM_KEY_NOT_SET, /**<The encaps or decaps key not set. */ CRYPT_MLKEM_KEYINFO_NOT_SET, /**<The algorithm not set. */ CRYPT_MLKEM_KEY_NOT_EQUAL, /**< The MLKEM keys are not equal. */ CRYPT_MLKEM_CTRL_NOT_SUPPORT, /**< The Ctrl type is not supported.*/ CRYPT_MLKEM_CTRL_INIT_REPEATED, /**< The CTX cannot be initialized repeatedly.*/ CRYPT_MLKEM_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_MLKEM_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_HPKE_ERR_GEN_ASYM_KEY = 0x01310001, /**< HPKE Generate asymmetric key error. */ CRYPT_HPKE_ERR_AEAD_TAG, /**< Failed to verify AEAD tag when decrypt. */ CRYPT_HPKE_ERR_CALL, /**< It is not appropriate to call this function. */ CRYPT_HPKE_FAILED_FETCH_CIPHER, /**< Failed to fetch cipher. */ CRYPT_HPKE_FAILED_FETCH_PKEY, /**< Failed to fetch pkey. */ CRYPT_HPKE_FAILED_FETCH_KDF, /**< Failed to fetch kdf. */ CRYPT_DECODE_ASN1_BUFF_NUM_NOT_ENOUGH = 0x01320001, /**< The input number of BSL_ANS1_Buffer is not enough. */ CRYPT_DECODE_UNSUPPORTED_PUBKEY_TYPE, /**< Unsupported pubkey type */ CRYPT_DECODE_UNSUPPORTED_PKCS8_TYPE, /**< Unsupported pkcs8 type */ CRYPT_DECODE_PKCS8_INVALID_ALGO_PARAM, /**< pkcs8 has no valid algorithm parameters */ CRYPT_DECODE_UNKNOWN_OID, /**< Unknown OID */ CRYPT_DECODE_ASN1_BUFF_FAILED, /**< decode asn1 buffer failed. */ CRYPT_DECODE_NO_SUPPORT_TYPE, /**< decode no support key type. 
*/ CRYPT_DECODE_NO_SUPPORT_FORMAT, /**< decode no support key format. */ CRYPT_DECODE_PKCS8_INVALID_ITER, /**< pkcs8 invalid iter num */ CRYPT_DECODE_PKCS8_INVALID_KEYLEN, /**< pkcs8 invalid keylen */ CRYPT_DECODE_ERR_RSSPSS_GET_ANY_TAG, /**< decode rsapss param failed. */ CRYPT_DECODE_ERR_RSSPSS, /**< decode rsapss param failed. */ CRYPT_DECODE_ERR_RSSPSS_MD, /**< rsapss md is invalid. */ CRYPT_DECODE_ERR_RSSPSS_MGF1MD, /**< rsapss mgf1md is invalid. */ CRYPT_DECODE_ERR_RSSPSS_TRAILER, /**< rsapss trailer field is invalid. */ CRYPT_DECODE_PKCS7_INVALIDE_ENCRYPTDATA_TYPE, /**< Invaild pkcs7-encryptedData. */ CRYPT_DECODE_UNSUPPORTED_PKCS7_TYPE, /**< Unsupported pkcs7 type */ CRYPT_DECODE_UNSUPPORTED_ENCRYPT_TYPE, /**< Unsupported encrypt type */ CRYPT_DECODE_BUFF_NOT_ENOUGH, /**< The input buffer space is not enough */ CRYPT_DECODE_ASN1_BUFF_LEN_ZERO, /**< The decoding length of asn1 buffer is zero. */ CRYPT_DECODE_ERR_NO_DECODER, /**< No decoder found. */ CRYPT_DECODE_ERR_NO_USABLE_DECODER, /**< No decoder found. */ CRYPT_DECODE_RETRY, /**< Retry decode. */ CRYPT_DECODE_ERR_CURR_NODE_NOT_FOUND, /**< Current node not found. */ CRYPT_DECODE_ERR_NO_KEY_TYPE, /**< No key type found. */ CRYPT_DECODE_ERR_KEY_TYPE_NOT_MATCH, /**< Key type not match. */ CRYPT_ENCODE_NO_SUPPORT_TYPE = 0x01330001, /**< encode no support key type. */ CRYPT_ENCODE_NO_SUPPORT_FORMAT, /**< encode no support key format. */ CRYPT_ENCODE_ERR_RSA_PAD, /**< rsa pad err. */ CRYPT_ENCODE_BUFF_NOT_ENOUGH, /**< The input buffer space is not enough */ CRYPT_ENCODE_ERR_SIGN_LEN_OVERFLOW, /**< The r and s length is too large. */ CRYPT_ENCODE_ERR_SM2_ENCRYPT_DATA_LEN_OVERFLOW, /**< The sm2 encrypt data length is too large. */ CRYPT_DECODE_PRINT_UNSUPPORT_ALG = 0x01340001, /**< Failed to print unsupported alg. */ CRYPT_DECODE_PRINT_NO_KEY, /**< Failed to print key. */ CRYPT_DECODE_PRINT_KEYBITS, /**< Failed to print key bist. */ CRYPT_DECODE_PRINT_MODULUS, /**< Failed to print modulus. */ CRYPT_DECODE_PRINT_EXPONENT, /**< Failed to print exponent. */ CRYPT_DECODE_PRINT_RSAPSS_PARA, /**< Failed to print rsapss para. */ CRYPT_DECODE_PRINT_ECC_PUB, /**< Failed to print ecc pubkey. */ CRYPT_DECODE_PRINT_ECC_OID, /**< Failed to print ecc oid. */ CRYPT_PROVIDER_ERR_UNEXPECTED_IMPL = 0x01350001, /**< Unexpected impl */ CRYPT_PROVIDER_ERR_IMPL_NULL, CRYPT_PROVIDER_NOT_FOUND, /**< Provider not found. */ CRYPT_PROVIDER_NOT_SUPPORT, CRYPT_PROVIDER_ERR_ATTRIBUTE, CRYPT_PROVIDER_INVALID_LIB_CTX, CRYPT_MLDSA_KEYINFO_NOT_SET = 0x01360001, /**< The algorithm not set. */ CRYPT_MLDSA_CTRL_NOT_SUPPORT, /**< The Ctrl type is not supported. */ CRYPT_MLDSA_PAD_TOO_LONG, /**< The pad is too long. */ CRYPT_MLDSA_KEYLEN_ERROR, /**< Incorrect input data length. */ CRYPT_MLDSA_SIGN_DATA_ERROR, /**< Invalid signature value. */ CRYPT_MLDSA_VERIFY_FAIL, /**< Failed to verify the signature. */ CRYPT_MLDSA_KEY_NOT_SET, /**< The public key or private not set. */ CRYPT_MLDSA_LEN_NOT_ENOUGH, /**< The buffer size of output is insufficient. */ CRYPT_MLDSA_KEY_NOT_EQUAL, /**< The MLDSA keys are not equal. */ CRYPT_MLDSA_CTRL_INIT_REPEATED, /**< The CTX cannot be initialized repeatedly.*/ CRYPT_MLDSA_SET_KEY_FAILED, /**< Failed to set the key. */ CRYPT_MLDSA_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_MLDSA_INVALID_PRVKEY, /**< Invalid private key. */ CRYPT_MLDSA_INVALID_PUBKEY, /**< Invalid public key. */ CRYPT_ELGAMAL_BUFF_LEN_NOT_ENOUGH = 0x01370001, /**< The buffer length is insufficient. 
*/ CRYPT_ELGAMAL_NO_KEY_INFO, /**< Lacks valid key information. */ CRYPT_ELGAMAL_ERR_KEY_BITS, /**< Incorrect key length. */ CRYPT_ELGAMAL_ERR_ENC_BITS, /**< Incorrect length of the encrypted plaintext of the public key. */ CRYPT_ELGAMAL_ERR_DEC_BITS, /**< Incorrect length of the decrypted ciphertext of the private key. */ CRYPT_ELGAMAL_ERR_KEY_KBITS, /**< Incorrect key length. */ CRYPT_ELGAMAL_ERR_KEY_BITS_KBITS, /**< Incorrect key length. */ CRYPT_ELGAMAL_ERR_ENC_KBITS, /**< Incorrect length of the encrypted plaintext of the public key. */ CRYPT_ELGAMAL_ERR_DEC_KBITS, /**< Incorrect length of the decrypted ciphertext of the private key. */ CRYPT_ELGAMAL_ERR_INPUT_VALUE, /**< Some special values, which are used as input errors. */ CRYPT_ELGAMAL_CTRL_NOT_SUPPORT_ERROR, /**< The Ctrl type is not supported When elgamal is used for Ctrl. */ CRYPT_SLHDSA_ERR_INVALID_ALGID = 0x01380001, /**< The algorithm id is invalid. */ CRYPT_SLHDSA_ERR_INVALID_SIG_LEN, /**< The signature length is invalid. */ CRYPT_SLHDSA_ERR_INVALID_KEYLEN, /**< The key length is invalid. */ CRYPT_SLHDSA_ERR_SIG_LEN_NOT_ENOUGH, /**< The signature length is not enough. */ CRYPT_SLHDSA_ERR_HYPERTREE_VERIFY_FAIL, /**< Hypertree verify failed. */ CRYPT_SLHDSA_ERR_PREHASH_ID_NOT_SUPPORTED, /**< Prehash id is not supported. */ CRYPT_SLHDSA_ERR_CONTEXT_LEN_OVERFLOW, /**< Context length is overflow. */ CRYPT_SLHDSA_PAIRWISE_CHECK_FAIL, /**< The public and private keys are inconsistent. */ CRYPT_SLHDSA_ERR_NO_PUBKEY, /**< No public key. */ CRYPT_SLHDSA_ERR_NO_PRVKEY, /**< No private key. */ CRYPT_PAILLIER_BUFF_LEN_NOT_ENOUGH = 0x01390001, /**< The buffer length is insufficient. */ CRYPT_PAILLIER_NO_KEY_INFO, /**< Lacks valid key information. */ CRYPT_PAILLIER_ERR_KEY_BITS, /**< Incorrect key length. */ CRYPT_PAILLIER_ERR_ENC_BITS, /**< Incorrect length of the encrypted plaintext of the public key. */ CRYPT_PAILLIER_ERR_DEC_BITS, /**< Incorrect length of the decrypted ciphertext of the private key. */ CRYPT_PAILLIER_ERR_INPUT_VALUE, /**< Some special values, which are used as input errors. */ CRYPT_PAILLIER_CTRL_NOT_SUPPORT_ERROR, /**< The Ctrl type is not supported When paillier is used for Ctrl. */ CRYPT_XMSS_ERR_INVALID_ALGID = 0x013A0001, /**< The algorithm id is invalid. */ CRYPT_XMSS_ERR_INVALID_SIG_LEN, /**< The signature length is invalid. */ CRYPT_XMSS_ERR_INVALID_KEYLEN, /**< The key length is invalid. */ CRYPT_XMSS_ERR_KEY_EXPIRED, /**< The key has expired. */ CRYPT_CMVP_COMMON_ERR = 0x013B0001, /**< Common error in CMVP selftest. */ CRYPT_CMVP_ERR_INTEGRITY, /**< Integrity error in CMVP selftest. */ CRYPT_CMVP_RANDOMNESS_ERR, /**< Randomness error in CMVP selftest. */ CRYPT_CMVP_ERR_ALGO_SELFTEST, /**< Algorithm selftest error in CMVP selftest. */ CRYPT_CMVP_ERR_PAIRWISETEST, /**< Pairwise test error in CMVP selftest. */ CRYPT_CMVP_ERR_PARAM_CHECK, /**< Parameter check error in CMVP selftest. */ }; #ifdef __cplusplus } #endif #endif // CRYPT_ERRNO_H /home/wsk/Desktop/openhitls/testcode/demo/RSA-2048.c: In function ‘main’: /home/wsk/Desktop/openhitls/testcode/demo/RSA-2048.c:46:35: error: ‘CRYPT_CTRL_SET_RSA_MODULUS_BITS’ undeclared (first use in this function); did you mean ‘CRYPT_CTRL_GET_ECC_ORDER_BITS’? 
46 | ret = CRYPT_EAL_PkeyCtrl(ctx, CRYPT_CTRL_SET_RSA_MODULUS_BITS, NULL, 2048); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | CRYPT_CTRL_GET_ECC_ORDER_BITS /home/wsk/Desktop/openhitls/testcode/demo/RSA-2048.c:46:35: note: each undeclared identifier is reported only once for each function it appears in [ 80%] Built target sm2sign [ 88%] Built target ecdh make[2]: *** [CMakeFiles/RSA-2048.dir/build.make:82: CMakeFiles/RSA-2048.dir/RSA-2048.c.o] Error 1 make[1]: *** [CMakeFiles/Makefile2:227: CMakeFiles/RSA-2048.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... [ 96%] Built target sm2enc make: *** [Makefile:103: all] Error 2
09-07
import os import sys import time import warnings from typing import List, Dict, Any, Union import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import interpolate import mph from sko.GA import GA from sko.PSO import PSO from sko.DE import DE warnings.filterwarnings('ignore') # Suppress unnecessary warnings plt.ion() # Enable interactive mode for matplotlib # model.study("std1").feature("time").set("tlist", "range(0,10,15000)"); #==================== Parameter Definitions =================================== OP='DE' # Optimization method: GA, PSO, DE C_Factor=3.6 # Coulomp to mAh transform Factor U_column=2 # Column index of voltage data in xlsx file,array index starts from 0 Eneg_column=1 # Column index of Eneg data in xlsx file,array index starts from 0 C_rate=[0.33,0.5,1.0,2.0,3.0,3.5] # C-rates for test data max_iter=20 # Optimization maximum iterations size_pop=40 # Optimization population size or particle numb WEIGHT_FACTOR_Eneg = 0.5 # Weight factor for voltage error calculation,1 for only voltage error WEIGHT_FACTOR_Cap = -0.3 # Weight factor for voltage error calculation,1 for only voltage error MODEL_NAME ='opt_test1_noT.mph' # COMSOL model file name Sim_Str=['Caps', 'Ecell', 'Eneg'] # Simulation variables: time, cell voltage, negative electrode potential Time_STR="range(0,10,15000)" # Time list for simulation Eneg_Bool=True # Boolean flag to include negative electrode potential Temp_func_Bool=False # Boolean flag to use test Temp data Current_func_Bool=False # Boolean flag to use test current data Temp_func="int3" # Temperature function name in COMSOL model Current_func="int4" # Current function name in COMSOL model INPUT_DIR = './input' # Input data directory OUTPUT_DIR = './output' # Output data directory OCV_NAME ='CELL_OCV.xlsx' # OCV data header OCV_FILE = os.path.join(INPUT_DIR, OCV_NAME) # OCV data file MODEL_PATH = os.path.join(os.getcwd(), MODEL_NAME) # COMSOL model file path # Parameter bounds for genetic algorithm optimization # Format: 'parameter_name': (lower_bound, upper_bound) PARAM_BOUNDS = { 'brugl_pos': (1.5, 2.5), 'brugl_neg': (1.5, 2.5), 'k_pos1': (1e-11, 1e-9), 'k_neg1': (1e-10, 1e-8), 'Ds_neg1': (1e-14, 5e-12), 'Ds_pos1': (1e-16, 1e-13), 'Dl_1': (1e-11, 5e-9), # 'Ea_Ds_pos': (10000, 100000), 'Ea_Ds_neg': (10000, 100000), 'Ea_Dl': (10000, 100000), 'Ea_k_pos': (10000, 100000), 'Ea_k_neg': (10000, 100000), } PARAM_UNITS = { # Define parameter units for proper COMSOL formatting 'brugl_pos': '', 'brugl_neg': '', 'k_pos1': '[m/s]', 'k_neg1': '[m/s]', 'Ds_neg1': '[m^2/s]', 'Ds_pos1': '[m^2/s]', 'Dl_1': '[m^2/s]', # 'Ea_Ds_pos': '[J/mol]', 'Ea_Ds_neg': '[J/mol]', 'Ea_Dl': '[J/mol]', 'Ea_k_pos': '[J/mol]', 'Ea_k_neg': '[J/mol]', } # Common bounds for all algorithms lb = [bounds[0] for bounds in PARAM_BOUNDS.values()] ub = [bounds[1] for bounds in PARAM_BOUNDS.values()] n_dim = len(PARAM_BOUNDS) #========================== Algorithm parameter configurations ================ GA_PARAMS = { 'func': None, # Will be set dynamically 'n_dim': n_dim, 'size_pop': size_pop, 'max_iter': max_iter, 'prob_mut': 0.01, 'lb': lb, 'ub': ub } PSO_PARAMS = { 'func': None, # Will be set dynamically 'n_dim': n_dim, 'pop': size_pop, # Size of population/particles (consistent with GA parameter naming) 'max_iter': max_iter, 'w': 0.8, # Inertia weight 'c1': 0.5, # Cognitive parameter 'c2': 0.5, # Social parameter 'lb': lb, 'ub': ub } DE_PARAMS = { 'func': None, # Will be set dynamically 'n_dim': n_dim, 'size_pop': size_pop, 'max_iter': max_iter, 'lb': lb, 'ub': ub, 
'F': 0.5, # Mutation factor 'prob_mut': 0.3 # Mutation probability (DE uses 'prob_mut' instead of 'CR') } #========================== Global Variables ================================== model = None # COMSOL model instance client = None # COMSOL client instance pymodel = None # PyCOMSOL model interface data_input = [] # List to store input datasets SOC_rate = [] # List to store SOC values for each dataset in_optimization_loop = True # Flag to track if we're in the main optimization loop #========================== Helper Functions ================================== def load_input_data() -> List[Dict[str, Any]]: """Load input data files from the INPUT_DIR. Returns: List[Dict[str, Any]]: List of dictionaries containing filename and data for each Excel file """ global data_input, SOC_rate # Get all Excel files in the input directory, excluding OCV files excel_files = [f for f in os.listdir(INPUT_DIR) if f.endswith('.xlsx') and 'OCV' not in f.upper()] data_input = [] # Reset data_input list for filename in excel_files: # Load each Excel file file_path = os.path.join(INPUT_DIR, filename) try: df = pd.read_excel(file_path) # Read Excel file data_array = df.values # Convert to numpy array data_input.append({ # Add to data_input list with metadata 'filename': filename, 'data': data_array }) print(f"Successfully loaded file: {filename}, data shape: {data_array.shape}") except Exception as e: print(f"Failed to load file {filename}: {str(e)}") if data_input: # Calculate initial SOC values calculate_initial_soc() return data_input def calculate_initial_soc() -> None: """Calculate initial State of Charge (SOC) for each dataset using OCV lookup.""" global SOC_rate SOC_rate = [] # Reset SOC_rate list try: # Read OCV data from Excel file ocv_df = pd.read_excel(OCV_FILE) ocv_data = ocv_df.values # Convert to numpy array unique_ocv = {} # Store unique OCV data points to avoid duplicates for row in ocv_data: if row[1] not in unique_ocv: unique_ocv[row[1]] = row[0] sorted_ocv = np.array([[v, k] for k, v in unique_ocv.items()]) # Create and sort OCV data for interpolation sorted_ocv = sorted_ocv[np.argsort(sorted_ocv[:, 1])] # Sort by voltage for data_item in data_input: if data_item['data'].shape[1] >= 3: # Ensure data has enough columns initial_voltage = data_item['data'][0, int(U_column)] # Column index int(U_column) is Ecell (Cell Voltage) f = interpolate.interp1d(sorted_ocv[:, 1], sorted_ocv[:, 0], # Create interpolation function for OCV-SOC relationship bounds_error=False,fill_value="extrapolate") soc = float(f(initial_voltage)) # Convert to scalar SOC_rate.append(soc) print(f"Initial SOC for file {data_item['filename']}: {soc}") except Exception as e: print(f"Failed to calculate initial SOC: {str(e)}") SOC_rate = [0.01] * len(data_input) # Set default SOC value for each dataset print(f"Using default SOC value of 0.01 for all datasets") def initialize_comsol_model() -> bool: """Initialize COMSOL model by starting client and loading the model file. 
Returns: bool: True if initialization successful, False otherwise """ global client, pymodel, model try: print("Starting COMSOL client...") client = mph.start() print("Loading COMSOL model...") # Check if model file exists before attempting to load if os.path.exists(MODEL_PATH): pymodel = client.load(MODEL_PATH) print("Creating Java Object...") model = pymodel.java print("COMSOL model loaded successfully") return True else: print(f"Model file not found: {MODEL_PATH}") # Clean up client if model loading fails client = None return False except Exception as e: print(f"Failed to initialize COMSOL model: {str(e)}") client = None # Ensure resources are released on failure pymodel = None model = None return False def set_model_parameters(params: Union[List[float], np.ndarray]) -> bool: """Set parameters in the COMSOL model with proper unit handling. Args: params: List or array of parameter values in the same order as PARAM_BOUNDS keys Returns: bool: True if parameters were set successfully, False otherwise """ global model try: if model is None: # Ensure model is initialized print("Cannot set parameters: COMSOL model not initialized") return False param_names = list(PARAM_BOUNDS.keys()) # Get parameter names from PARAM_BOUNDS if len(params) != len(param_names): # Verify parameter count matches expected number print(f"Parameter count mismatch: expected {len(param_names)}, got {len(params)}") return False for i, param_name in enumerate(param_names): # Set each parameter in the model with appropriate unit param_value = float(params[i]) # Ensure numeric value unit = PARAM_UNITS[param_name] # Get unit from dictionary if unit: # Set parameter with or without unit as needed model.param().set(param_name, f'{param_value}{unit}') else: model.param().set(param_name, f'{param_value}') return True except Exception as e: print(f"Failed to set model parameters: {str(e)}") return False def calculate_rmse(params: Union[List[float], np.ndarray]) -> float: """Calculate Root Mean Square Error (RMSE) between experimental and simulation data. Args: params: List or array of model parameters to evaluate Returns: float: Average RMSE value across all datasets in mV. Returns 666 (large value) in case of errors to indicate optimization failure. 
""" global model, data_input, SOC_rate, in_optimization_loop try: if not set_model_parameters(params): # Set model parameters and check if successful print("Failed to set model parameters in calculate_rmse") # When Set Success , Return Ture,Not True=Faluse, Not In Loop return 666 # Return large value to indicate failure RMSE_values = [] # Initialize storage for RMSE values n = len(data_input) for i in range(n): # Run Process each dataset try: # model.param().set('SOC', f'{SOC_rate[i]}') # Set initial SOC value for current dataset data_array = data_input[i]['data'] # Get input data for current dataset if data_array.shape[1] >= 3: # Ensure input data has sufficient columns base_name = os.path.splitext(data_input[i]['filename'])[0] # Get base name directly from data_input if Temp_func_Bool: temp_file_path = os.path.join(INPUT_DIR, f"{base_name}_Temp.txt") # Dynamic temp file path model.component("comp1").func(Temp_func).set("filename", temp_file_path) if Current_func_Bool: current_file_path = os.path.join(INPUT_DIR, f"{base_name}_Current.txt") # Dynamic current file path model.component("comp1").func(Current_func).set("filename", current_file_path) else: # else set C parameter model.param().set("C", str(C_rate[i])) # c_value = base_name.replace("C", "") print(f"Set C parameter to {C_rate[i]} for dataset{data_input[i]['filename']}") model.study("std1").feature("time").set("tlist", Time_STR); # Set time list for current dataset model.sol("sol1").runAll() # Run COMSOL simulation # try: # Extract simulation data from COMSOL sim_data = pymodel.evaluate(Sim_Str) # Evaluate simulation variables sim_Time = sim_data[:, 0]/C_Factor # Time in seconds or Capacity in mAh sim_Ecell = sim_data[:, 1] # Cell voltage data sim_Eneg_10 = sim_data[:, 2] # Negative electrode potential data # except Exception as inner_e: # print(f"Failed to use evaluate method for dataset {data_input[i]['filename']}: {str(inner_e)}") # continue else: print(f"Dataset {data_input[i]['filename']} does not have sufficient columns for processing") continue exp_Time = data_array[:, 0] # Experimental time data exp_Ecell = data_array[:, int(U_column)] # Experimental cell voltage if Eneg_Bool: exp_Eneg_10 = data_array[:, int(Eneg_column)] # Experimental negative electrode potential t_min = min(max(exp_Time), max(sim_Time)) # Determine interpolation time range t_min_array = np.linspace(0, t_min, 300) # Create uniform time points for interpolation # Create interpolation functions for simulation and experimental data f_sim_Ecell = interpolate.interp1d(sim_Time, sim_Ecell, bounds_error=False, fill_value="extrapolate") f_exp_Ecell = interpolate.interp1d(exp_Time, exp_Ecell, bounds_error=False, fill_value="extrapolate") f_sim_Eneg_10 = interpolate.interp1d(sim_Time, sim_Eneg_10, bounds_error=False, fill_value="extrapolate") # Interpolate data to match time points sim_interp_Ecell = f_sim_Ecell(t_min_array) exp_interp_Ecell = f_exp_Ecell(t_min_array) sim_interp_Eneg_10 = f_sim_Eneg_10(t_min_array) # Calculate RMSE values in millivolts diff_Ecell = (exp_interp_Ecell - sim_interp_Ecell) * 1000 # Convert to mV RMSE_Ecell = np.sqrt(np.mean(diff_Ecell**2)) # Voltage RMSE if Eneg_Bool: f_exp_Eneg_10 = interpolate.interp1d(exp_Time, exp_Eneg_10, bounds_error=False, fill_value="extrapolate") exp_interp_Eneg_10 = f_exp_Eneg_10(t_min_array) diff_Eneg_10 = (exp_interp_Eneg_10 - sim_interp_Eneg_10) * 1000 # Convert to mV RMSE_Eneg_10 = np.sqrt(np.mean(diff_Eneg_10**2)) # Negative potential RMSE else: RMSE_Eneg_10 = 0 # Calculate weighted average of RMSE 
values if WEIGHT_FACTOR_Cap > 0: RMSE_Cap=np.sqrt((exp_Time[-1]-sim_Time[-1])**2) # Capacity RMSE RMSE_val = RMSE_Ecell * (1-WEIGHT_FACTOR_Eneg-WEIGHT_FACTOR_Cap) + RMSE_Eneg_10 *WEIGHT_FACTOR_Eneg+RMSE_Cap*WEIGHT_FACTOR_Cap RMSE_values.append(RMSE_val) else: RMSE_val = RMSE_Ecell * (1-WEIGHT_FACTOR_Eneg) + RMSE_Eneg_10 *WEIGHT_FACTOR_Eneg RMSE_values.append(RMSE_val) rmse_val_scalar = float(RMSE_val[0]) if isinstance(RMSE_val, np.ndarray) else float(RMSE_val) print(f"RMSE for dataset {data_input[i]['filename']} : {rmse_val_scalar:.4f} mV") # Convert to scalar if necessary for proper printing except Exception as e: print(f"Error calculating RMSE for dataset {data_input[i]['filename']}: {str(e)}") # RMSE_values.append(666) # Append large value to indicate dataset failure print(f"Error calculating RMSE for dataset {data_input[i]['filename']}:Re-Start Calculattion") return 666 if RMSE_values: # Calculate and return average RMSE avg_rmse = np.mean(RMSE_values) avg_rmse_scalar = float(avg_rmse[0]) if isinstance(avg_rmse, np.ndarray) else float(avg_rmse) print(f"Average RMSE for dataset: {avg_rmse_scalar:.4f} mV") return avg_rmse else: print("No valid RMSE values calculated") return 666 except Exception as e: print(f"Error occurred while calculating RMSE: {str(e)}") return 666 def run_optimization(func, op_type='GA') -> tuple: """Run optimization using the specified algorithm. Args: func: The objective function to minimize op_type: Algorithm type ('GA', 'PSO', 'DE') Returns: tuple: (best_params, best_rmse, algorithm_instance) """ global in_optimization_loop # Set the objective function in the appropriate parameter dictionary if op_type == 'GA': params = GA_PARAMS.copy() params['func'] = func optimizer = GA(**params) print(f"Running Genetic Algorithm (GA) optimization...") print(f"Population size: {params['size_pop']}") print(f"Maximum iterations: {params['max_iter']}") print(f"Mutation probability: {params['prob_mut']}") elif op_type == 'PSO': params = PSO_PARAMS.copy() params['func'] = func optimizer = PSO(**params) print(f"Running Particle Swarm Optimization (PSO)...") print(f"Swarm size: {params['pop']}") print(f"Maximum iterations: {params['max_iter']}") print(f"Inertia weight (w): {params['w']}") print(f"Cognitive parameter (c1): {params['c1']}") print(f"Social parameter (c2): {params['c2']}") elif op_type == 'DE': params = DE_PARAMS.copy() params['func'] = func optimizer = DE(**params) print(f"Running Differential Evolution (DE)...") print(f"Population size: {params['size_pop']}") print(f"Maximum iterations: {params['max_iter']}") print(f"Mutation factor (F): {params['F']}") print(f"Mutation probability: {params['prob_mut']}") else: raise ValueError(f"Unsupported optimization algorithm: {op_type}") print(f"Number of parameters to optimize: {params['n_dim']}") print("="*60) try: best_params, best_rmse = optimizer.run() return best_params, best_rmse, optimizer except KeyboardInterrupt: print("\nWARNING: Optimization interrupted by user") return None, None, optimizer except Exception as e: print(f"ERROR: Optimization failed: {str(e)}") return None, None, None def plot_optimization_results(op_instance: Any, op_type='GA') -> None: """Plot and save optimization convergence curve. 
    Args:
        op_instance: The optimization instance containing generation data
        op_type: Algorithm type ('GA', 'PSO', 'DE')
    """
    plt.figure(figsize=(10, 6))

    # Get convergence data based on algorithm type
    try:
        if op_type == 'GA':
            convergence_data = op_instance.generation_best_Y
        elif op_type == 'PSO':
            convergence_data = op_instance.gbest_y_hist
        elif op_type == 'DE':
            convergence_data = op_instance.generation_best_Y
        else:
            # Default fallback for any other algorithm
            convergence_data = getattr(op_instance, 'generation_best_Y', [])
            if not convergence_data:
                convergence_data = getattr(op_instance, 'gbest_y_hist', [])

        if convergence_data:
            plt.plot(convergence_data)
            plt.xlabel('Number of Generations' if op_type in ('GA', 'DE') else 'Iterations')
            plt.ylabel('RMSE Value (mV)')
            plt.title(f'{op_type} Optimization Convergence Curve')
            plt.grid(True)
            # Save with algorithm-specific filename
            filename = f'{op_type.lower()}_convergence_curve.png'
            plt.savefig(os.path.join(OUTPUT_DIR, filename), dpi=300)
            plt.show(block=False)
            print(f"Convergence plot saved as: {filename}")
        else:
            print(f"Warning: No convergence data found for {op_type}")
    except Exception as e:
        print(f"Error plotting convergence curve for {op_type}: {str(e)}")


def save_results(op_instance, op_type='GA', best_params=None, best_rmse=None) -> None:
    """Save optimization results to output files.

    Args:
        op_instance: The optimization instance containing results
        op_type: Algorithm type ('GA', 'PSO', 'DE')
        best_params: Pre-computed best parameters (optional)
        best_rmse: Pre-computed best RMSE (optional)
    """
    try:
        # Fill in best parameters and RMSE from the optimizer instance if not supplied
        if best_params is None or best_rmse is None:
            if op_type == 'GA':
                if best_params is None and hasattr(op_instance, 'generation_best_X'):
                    best_params = op_instance.generation_best_X[-1]
                if best_rmse is None and hasattr(op_instance, 'generation_best_Y'):
                    best_rmse = op_instance.generation_best_Y[-1]
            elif op_type == 'PSO':
                if best_params is None and hasattr(op_instance, 'gbest_x'):
                    best_params = op_instance.gbest_x
                if best_rmse is None and hasattr(op_instance, 'gbest_y'):
                    best_rmse = op_instance.gbest_y
            elif op_type == 'DE':
                if best_params is None and hasattr(op_instance, 'generation_best_X'):
                    best_params = op_instance.generation_best_X[-1]
                if best_rmse is None and hasattr(op_instance, 'generation_best_Y'):
                    best_rmse = op_instance.generation_best_Y[-1]

        # Ensure we have valid results
        if best_params is None or best_rmse is None:
            raise ValueError(f"Could not retrieve optimization results from {op_type} instance")

        # Save results with algorithm-specific filename
        output_file_path = os.path.join(OUTPUT_DIR, f'python_{op_type.lower()}_results.txt')
        with open(output_file_path, 'w') as f:
            f.write(f"Optimization Algorithm: {op_type}\n")
            f.write(f"Best RMSE value: {float(best_rmse):.6f} mV\n\n")
            f.write("Best parameter values:\n")
            for i, param_name in enumerate(list(PARAM_BOUNDS.keys())):
                param_val = float(best_params[i])
                f.write(f"{param_name}: {param_val:.6e}\n")
        print(f"Optimization results saved to: {output_file_path}")
    except Exception as e:
        print(f"Error saving optimization results for {op_type}: {str(e)}")


def generate_comparison_plots_with_best_params(best_params_op):
    """Re-run the simulation with the optimal parameters and plot simulation vs. experiment."""
    final_rmse = None
    try:
        final_rmse = calculate_rmse(best_params_op)
    except Exception as rse:
        print(f"WARNING: Failed to calculate final RMSE: {str(rse)}")

    RMSE_values = []
    for i in range(len(data_input)):
        try:
            # model.param().set('SOC', f'{SOC_rate[i]}')  # Set initial SOC value for current dataset
            data_array = data_input[i]['data']  # Get input data for current dataset
            if data_array.shape[1] >= 3:  # Ensure input data has sufficient columns
                base_name = os.path.splitext(data_input[i]['filename'])[0]  # Base name taken directly from data_input
                if Temp_func_Bool:
                    temp_file_path = os.path.join(INPUT_DIR, f"{base_name}_Temp.txt")  # Dynamic temp file path
                    model.component("comp1").func(Temp_func).set("filename", temp_file_path)
                if Current_func_Bool:
                    current_file_path = os.path.join(INPUT_DIR, f"{base_name}_Current.txt")  # Dynamic current file path
                    model.component("comp1").func(Current_func).set("filename", current_file_path)
                else:
                    # Otherwise set the C parameter
                    # c_value = base_name.replace("C", "")
                    model.param().set("C", str(C_rate[i]))
                    print(f"Set C parameter to {C_rate[i]} for dataset {data_input[i]['filename']}")

                model.study("std1").feature("time").set("tlist", Time_STR)  # Set time list for current dataset
                model.sol("sol1").runAll()  # Run COMSOL simulation

                # try:
                # Extract simulation data from COMSOL
                sim_data = pymodel.evaluate(Sim_Str)   # Evaluate simulation variables
                sim_Time = sim_data[:, 0] / C_Factor   # Time in seconds
                sim_Ecell = sim_data[:, 1]             # Cell voltage data
                sim_Eneg_10 = sim_data[:, 2]           # Negative electrode potential data
                # except Exception as inner_e:
                #     print(f"Failed to use evaluate method for dataset {C_rate[i]}.xlsx: {str(inner_e)}")
                #     continue
            else:
                print(f"Dataset {C_rate[i]}.xlsx does not have sufficient columns for processing")
                continue

            exp_Time = data_array[:, 0]               # Experimental time data
            exp_Ecell = data_array[:, int(U_column)]  # Experimental cell voltage
            if Eneg_Bool:
                exp_Eneg_10 = data_array[:, int(Eneg_column)]  # Experimental negative electrode potential

            t_min = min(max(exp_Time), max(sim_Time))  # Determine interpolation time range
            t_min_array = np.linspace(0, t_min, 300)   # Create uniform time points for interpolation

            # Create interpolation functions for simulation and experimental data
            f_sim_Ecell = interpolate.interp1d(sim_Time, sim_Ecell, bounds_error=False, fill_value="extrapolate")
            f_exp_Ecell = interpolate.interp1d(exp_Time, exp_Ecell, bounds_error=False, fill_value="extrapolate")
            f_sim_Eneg_10 = interpolate.interp1d(sim_Time, sim_Eneg_10, bounds_error=False, fill_value="extrapolate")

            # Interpolate data onto the shared time points
            sim_interp_Ecell = f_sim_Ecell(t_min_array)
            exp_interp_Ecell = f_exp_Ecell(t_min_array)
            sim_interp_Eneg_10 = f_sim_Eneg_10(t_min_array)

            # Calculate RMSE values in millivolts
            diff_Ecell = (exp_interp_Ecell - sim_interp_Ecell) * 1000  # Convert to mV
            RMSE_Ecell = np.sqrt(np.mean(diff_Ecell ** 2))             # Voltage RMSE
            if Eneg_Bool:
                f_exp_Eneg_10 = interpolate.interp1d(exp_Time, exp_Eneg_10, bounds_error=False, fill_value="extrapolate")
                exp_interp_Eneg_10 = f_exp_Eneg_10(t_min_array)
                diff_Eneg_10 = (exp_interp_Eneg_10 - sim_interp_Eneg_10) * 1000  # Convert to mV
                RMSE_Eneg_10 = np.sqrt(np.mean(diff_Eneg_10 ** 2))               # Negative potential RMSE
            else:
                RMSE_Eneg_10 = 0

            # Calculate weighted average of RMSE values
            if WEIGHT_FACTOR_Cap > 0:
                RMSE_Cap = np.sqrt((exp_Time[-1] - sim_Time[-1]) ** 2)  # Capacity (end-time) RMSE
                RMSE_val = (RMSE_Ecell * (1 - WEIGHT_FACTOR_Eneg - WEIGHT_FACTOR_Cap)
                            + RMSE_Eneg_10 * WEIGHT_FACTOR_Eneg
                            + RMSE_Cap * WEIGHT_FACTOR_Cap)
                RMSE_values.append(RMSE_val)
            else:
                RMSE_val = RMSE_Ecell * (1 - WEIGHT_FACTOR_Eneg) + RMSE_Eneg_10 * WEIGHT_FACTOR_Eneg
                RMSE_values.append(RMSE_val)

            # Convert to scalar if necessary for proper printing
            rmse_val_scalar = float(RMSE_val[0]) if isinstance(RMSE_val, np.ndarray) else float(RMSE_val)
            print(f"RMSE for dataset {data_input[i]['filename']}: {rmse_val_scalar:.4f} mV")

            try:
                # Create and save plot
                plt.figure(figsize=(18, 5))

                # Cell voltage comparison plot
                plt.subplot(1, 3, 1)
                plt.plot(t_min_array, sim_interp_Ecell, 'r-', label='Simulation')
                plt.plot(t_min_array, exp_interp_Ecell, 'bo', label='Experiment')
                plt.xlabel('Time (s)')
                plt.ylabel('Cell Voltage (V)')
                plt.title('Cell Voltage: Simulation vs Experiment')
                plt.legend()
                plt.grid(True)

                # Negative potential comparison plot
                plt.subplot(1, 3, 2)
                plt.plot(t_min_array, sim_interp_Eneg_10 * 1000, 'r-', label='Simulation')
                if Eneg_Bool:
                    plt.plot(t_min_array, exp_interp_Eneg_10 * 1000, 'bo', label='Experiment')
                plt.xlabel('Time (s)')
                plt.ylabel('Negative Potential (mV)')
                plt.title('Negative Potential: Simulation vs Experiment')
                plt.legend()
                plt.grid(True)

                # Error comparison plot
                plt.subplot(1, 3, 3)
                plt.plot(t_min_array, diff_Ecell, 'g-', label='Cell Voltage Error')
                if Eneg_Bool:
                    plt.plot(t_min_array, diff_Eneg_10, 'm-', label='Negative Potential Error')
                plt.xlabel('Time (s)')
                plt.ylabel('Error (mV)')
                # Add RMSE to title if available
                rmse_text = "N/A" if rmse_val_scalar is None else f"{float(rmse_val_scalar):.2f}"
                plt.title(f'Errors (RMSE: {rmse_text} mV)')
                plt.legend()
                plt.grid(True)
                plt.tight_layout()

                # Save plot with standardized path
                plot_filename = f'{OP}_op_result_comp_dataset_{data_input[i]["filename"]}.png'
                plot_path = os.path.join(OUTPUT_DIR, plot_filename)
                plt.savefig(plot_path, dpi=300)
                plt.show(block=False)
                print(f"Comparison plot saved as: {plot_path}")
            except Exception as plot_e:
                print(f"ERROR: Failed to generate plots for dataset {data_input[i]['filename']}: {str(plot_e)}")
        except Exception as ei:
            print(f"ERROR: Failed to process dataset {data_input[i]['filename']}: {str(ei)}")

    if RMSE_values:
        # Calculate the optimal average RMSE
        avg_rmse = np.mean(RMSE_values)
        avg_rmse_scalar = float(avg_rmse[0]) if isinstance(avg_rmse, np.ndarray) else float(avg_rmse)
        print(f"Optimal Average RMSE: {avg_rmse_scalar:.4f} mV")


def main() -> int:
    """Main function to execute parameter optimization with the selected algorithm.

    Returns:
        int: Exit code (0 for success, non-zero for errors)
    """
    global client, pymodel, model, OP
    start_time = None
    exit_code = 0  # Default to success

    # Validate optimization algorithm type
    supported_algorithms = ['DE', 'GA', 'PSO']
    if OP not in supported_algorithms:
        print(f"ERROR: Unsupported optimization algorithm: {OP}")
        print(f"Supported algorithms: {', '.join(supported_algorithms)}")
        return 1

    try:
        start_time = time.time()  # Record start time
        print(f"===== Python {OP} Parameter Optimization Started =====")

        # Step I: Load input data with error handling
        print("\nI. Loading input data...")
        try:
            data_result = load_input_data()
            if not data_result and not data_input:  # Check both function return and global variable
                print("ERROR: No valid input data found")
                return 1
        except Exception as e:
            print(f"ERROR: Failed to load input data: {str(e)}")
            return 1

        # Step II: Initialize COMSOL model with error handling
        print("\nII. Initializing COMSOL model...")
        if not initialize_comsol_model():
            print("ERROR: COMSOL model initialization failed")
            return 2

        # Steps III & IV: Configure and run the selected optimization algorithm
        print(f"\nIII. Starting {OP} optimization...")
        best_params, best_rmse, optimizer = run_optimization(calculate_rmse, OP)
        if optimizer is None:
            print(f"ERROR: Failed to initialize {OP} optimizer")
            exit_code = 3

        # Step V: Display optimization results with error handling
        if best_params is not None and best_rmse is not None:
            print("\nV. Optimization results:")
            try:
                best_rmse_val = float(best_rmse[0]) if isinstance(best_rmse, np.ndarray) else float(best_rmse)
                print(f"Best RMSE value: {best_rmse_val:.6f} mV")
                print("\nBest parameter values:")
                for i, param_name in enumerate(list(PARAM_BOUNDS.keys())):
                    try:
                        param_val = float(best_params[i])
                        print(f"{param_name}: {param_val:.6e}")
                    except IndexError:
                        print(f"WARNING: Missing value for parameter {param_name}")
                    except Exception as e:
                        print(f"WARNING: Failed to process parameter {param_name}: {str(e)}")
            except Exception as e:
                print(f"ERROR: Failed to display optimization results: {str(e)}")

        # Step VI: Plot and save results with error handling
        if optimizer is not None:
            print("\nVI. Plotting and saving results...")
            try:
                plot_optimization_results(optimizer, OP)
            except Exception as e:
                print(f"ERROR: Failed to plot optimization results: {str(e)}")
            try:
                save_results(optimizer, OP, best_params, best_rmse)
            except Exception as e:
                print(f"ERROR: Failed to save optimization results: {str(e)}")

        # Step VII: Set optimal parameters and save model with error handling
        if best_params is not None and pymodel is not None:
            print("\nVII. Setting optimal parameters and saving model...")
            try:
                if set_model_parameters(best_params):
                    # Build the output name from the model's base name so the directory is not duplicated
                    original_model_name = os.path.splitext(os.path.basename(MODEL_PATH))[0]
                    # Add algorithm type to the optimized model filename
                    optimized_model_path = os.path.join(os.path.dirname(MODEL_PATH),
                                                        f"{original_model_name}_Op_{OP}.mph")
                    pymodel.save(optimized_model_path)
                    print(f"Optimized model saved as: {optimized_model_path}")
                else:
                    print("WARNING: Failed to set optimal parameters in model")
            except Exception as e:
                print(f"ERROR: Failed to save optimized model: {str(e)}")

        # Step VIII: Generate comparison plots with optimal parameters
        if best_params is not None and data_input:
            print("\nVIII. Running simulation with optimal parameters...")
            generate_comparison_plots_with_best_params(best_params)

        # Calculate and display total execution time
        if start_time:
            end_time = time.time()
            print(f"\nTotal optimization time: {(end_time - start_time):.2f} seconds")

        if exit_code == 0:
            print(f"\n===== Python {OP} Parameter Optimization Completed Successfully =====")
        else:
            print(f"\n===== Python {OP} Parameter Optimization Completed with Errors =====")
        return exit_code
    except KeyboardInterrupt:
        print("\nERROR: Program interrupted by user")
        return 130  # Standard exit code for keyboard interrupt
    except Exception as e:
        print(f"ERROR: Unexpected error during program execution: {str(e)}")
        import traceback
        print("Detailed error information:")
        traceback.print_exc()
        return 5


# Main execution entry point
if __name__ == "__main__":
    sys.exit(main())

Analyze the code above: explain the computational logic of each function and how variables and return values are passed between them. In particular, break down the main functions that set parameters on the COMSOL model and run it, retrieve the COMSOL results and compute the RMS error against the measured data, and drive the optimization algorithm's iterations. Give a concrete explanation of the variable flow.
import os.path as osp
from collections import OrderedDict
import math
import copy

import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.cuda.amp import GradScaler, autocast

from dassl.engine import TRAINER_REGISTRY, TrainerX
from dassl.metrics import compute_accuracy
from dassl.utils import load_pretrained_weights, load_checkpoint
from dassl.optim import build_optimizer, build_lr_scheduler

from clip import clip
from clip.simple_tokenizer import SimpleTokenizer as _Tokenizer

try:
    # Prefer relative import to avoid PYTHONPATH issues
    from .capid_modules import (
        CrossAttentionCoupler,
        DiffusionPromptGenerator,
        InteractiveGate,
        save_debug_image,
    )
except Exception:
    # Fall back to absolute import if needed
    from trainers.capid_modules import (
        CrossAttentionCoupler,
        DiffusionPromptGenerator,
        InteractiveGate,
        save_debug_image,
    )

_tokenizer = _Tokenizer()


class CrossAttentivePromptBridge(nn.Module):
    """Bridge deep text/vision prompts with bi-directional cross-attention.

    - Projects text (512) and vision (768) prompts to a common dim (default 512).
    - Runs two multi-head attentions: text<-vision and vision<-text.
    - Residual fuse with small alpha, then project back to original dims.
    - Expects lists of tensors per depth: [ (n_ctx, 512) ], [ (n_ctx, 768) ].
    """

    def __init__(self, dim_text: int = 512, dim_vision: int = 768, dim_common: int = 512,
                 heads: int = 4, dropout: float = 0.0, alpha: float = 0.1):
        super().__init__()
        self.txt_to_common = nn.Linear(dim_text, dim_common, bias=False)
        self.vis_to_common = nn.Linear(dim_vision, dim_common, bias=False)
        self.common_to_txt = nn.Linear(dim_common, dim_text, bias=False)
        self.common_to_vis = nn.Linear(dim_common, dim_vision, bias=False)
        self.attn_tq = nn.MultiheadAttention(dim_common, heads, dropout=dropout, batch_first=True)
        self.attn_vq = nn.MultiheadAttention(dim_common, heads, dropout=dropout, batch_first=True)
        self.alpha = alpha

    def forward(self, deep_txt_list, deep_vis_list, alpha: float = None):
        if alpha is None:
            alpha = self.alpha
        alpha = float(max(0.0, min(1.0, alpha)))
        # Stack to (L, n_ctx, C)
        txt = torch.stack(deep_txt_list, dim=0)  # (L, n_ctx, 512)
        vis = torch.stack(deep_vis_list, dim=0)  # (L, n_ctx, 768)
        L, n_ctx_t, dt = txt.shape
        L2, n_ctx_v, dv = vis.shape
        assert L == L2 and n_ctx_t == n_ctx_v, "Text/Vision deep prompts must align in depth and n_ctx"
        S = L * n_ctx_t
        txt_seq = txt.reshape(S, dt)
        vis_seq = vis.reshape(S, dv)
        t = self.txt_to_common(txt_seq).unsqueeze(0)  # (1, S, dc)
        v = self.vis_to_common(vis_seq).unsqueeze(0)  # (1, S, dc)
        # Bi-directional cross-attention
        t2, _ = self.attn_tq(t, v, v)
        v2, _ = self.attn_vq(v, t, t)
        # Stabilize and residual blend
        t2 = F.layer_norm(t2, t2.shape[-1:])
        v2 = F.layer_norm(v2, v2.shape[-1:])
        t_out = (1.0 - alpha) * t + alpha * t2
        v_out = (1.0 - alpha) * v + alpha * v2
        # Back to original dims and list form
        t_out = self.common_to_txt(t_out.squeeze(0)).reshape(L, n_ctx_t, dt)
        v_out = self.common_to_vis(v_out.squeeze(0)).reshape(L, n_ctx_t, dv)
        out_txt_list = [t_out[i] for i in range(L)]
        out_vis_list = [v_out[i] for i in range(L)]
        return out_txt_list, out_vis_list


def load_clip_to_cpu(cfg):
    backbone_name = cfg.MODEL.BACKBONE.NAME
    url = clip._MODELS[backbone_name]
    model_path = clip._download(url)

    try:
        # Loading JIT archive
        model = torch.jit.load(model_path, map_location="cpu").eval()
        state_dict = None
    except RuntimeError:
        state_dict = torch.load(model_path, map_location="cpu")

    design_details = {"trainer": "MaPLe",
                      "vision_depth": 0,
                      "language_depth": 0,
                      "vision_ctx": 0,
                      "language_ctx": 0,
                      "maple_length": cfg.TRAINER.MAPLE.N_CTX}
    model = clip.build_model(state_dict or model.state_dict(), design_details)
    return model


class TextEncoder(nn.Module):
    def __init__(self, clip_model):
        super().__init__()
        self.transformer = clip_model.transformer
        self.positional_embedding = clip_model.positional_embedding
        self.ln_final = clip_model.ln_final
        self.text_projection = clip_model.text_projection
        self.dtype = clip_model.dtype

    def forward(self, prompts, tokenized_prompts, compound_prompts_deeper_text):
        x = prompts + self.positional_embedding.type(self.dtype)
        x = x.permute(1, 0, 2)  # NLD -> LND
        # Pass as a list, since nn.Sequential cannot process multiple arguments in the forward pass
        combined = [x, compound_prompts_deeper_text, 0]  # third element is the counter denoting prompt depth
        outputs = self.transformer(combined)
        x = outputs[0]  # extract x back from the outputs
        x = x.permute(1, 0, 2)  # LND -> NLD
        x = self.ln_final(x).type(self.dtype)

        # x.shape = [batch_size, n_ctx, transformer.width]
        # Take features from the eot embedding (eot_token is the highest number in each sequence)
        x = x[torch.arange(x.shape[0]), tokenized_prompts.argmax(dim=-1)] @ self.text_projection
        return x


class MultiModalPromptLearner(nn.Module):
    def __init__(self, cfg, classnames, clip_model):
        super().__init__()
        n_cls = len(classnames)
        n_ctx = cfg.TRAINER.MAPLE.N_CTX
        ctx_init = cfg.TRAINER.MAPLE.CTX_INIT
        dtype = clip_model.dtype
        ctx_dim = clip_model.ln_final.weight.shape[0]
        clip_imsize = clip_model.visual.input_resolution
        cfg_imsize = cfg.INPUT.SIZE[0]
        # Default is 1, which is compound shallow prompting
        assert cfg.TRAINER.MAPLE.PROMPT_DEPTH >= 1, "For MaPLe, PROMPT_DEPTH should be >= 1"
        self.compound_prompts_depth = cfg.TRAINER.MAPLE.PROMPT_DEPTH  # max=12, but will create 11 such shared prompts
        assert cfg_imsize == clip_imsize, f"cfg_imsize ({cfg_imsize}) must equal to clip_imsize ({clip_imsize})"

        if ctx_init and n_ctx <= 4:
            # Use the given words to initialize context vectors
            ctx_init = ctx_init.replace("_", " ")
            prompt = clip.tokenize(ctx_init)
            with torch.no_grad():
                embedding = clip_model.token_embedding(prompt).type(dtype)
            ctx_vectors = embedding[0, 1: 1 + n_ctx, :]
            prompt_prefix = ctx_init
        else:
            # Random initialization
            ctx_vectors = torch.empty(n_ctx, ctx_dim, dtype=dtype)
            nn.init.normal_(ctx_vectors, std=0.02)
            prompt_prefix = " ".join(["X"] * n_ctx)

        print('MaPLe design: Multi-modal Prompt Learning')
        print(f'Initial context: "{prompt_prefix}"')
        print(f"Number of MaPLe context words (tokens): {n_ctx}")

        # These relate to the shallow prompts:
        # a linear layer so the 512-d tokens project to 768 for the visual side
        self.proj = nn.Linear(ctx_dim, 768)
        self.proj.half()
        self.ctx = nn.Parameter(ctx_vectors)

        # These parameters relate to the shared (compound) prompts.
        # Define the compound prompts for the deeper layers;
        # the minimum can be 1, which defaults to shallow MaPLe.
        self.compound_prompts_text = nn.ParameterList([nn.Parameter(torch.empty(n_ctx, 512))
                                                       for _ in range(self.compound_prompts_depth - 1)])
        for single_para in self.compound_prompts_text:
            nn.init.normal_(single_para, std=0.02)
        # Also make a corresponding projection layer for each prompt
        single_layer = nn.Linear(ctx_dim, 768)
        self.compound_prompt_projections = _get_clones(single_layer, self.compound_prompts_depth - 1)

        classnames = [name.replace("_", " ") for name in classnames]
        name_lens = [len(_tokenizer.encode(name)) for name in classnames]
        prompts = [prompt_prefix + " " + name + "." for name in classnames]

        tokenized_prompts = torch.cat([clip.tokenize(p) for p in prompts])  # (n_cls, n_tkn)
        with torch.no_grad():
            embedding = clip_model.token_embedding(tokenized_prompts).type(dtype)

        # These token vectors will be saved in save_model(),
        # but they should be ignored in load_model() as we want to use
        # those computed from the current class names
        self.register_buffer("token_prefix", embedding[:, :1, :])          # SOS
        self.register_buffer("token_suffix", embedding[:, 1 + n_ctx:, :])  # CLS, EOS

        self.n_cls = n_cls
        self.n_ctx = n_ctx
        self.tokenized_prompts = tokenized_prompts  # torch.Tensor
        self.name_lens = name_lens

        # --- Optional CAPID modules integrated at the prompt learner level ---
        self._clip_model_ref = clip_model
        capid = getattr(cfg.TRAINER, "CAPID", None)
        self.capid_enabled = bool(getattr(capid, "ENABLE", False)) if capid is not None else False
        if self.capid_enabled:
            self.ca_enabled = bool(getattr(capid, "CA_ENABLE", False))
            self.diff_enabled = bool(getattr(capid, "DIFF_ENABLE", False))
            self.gate_enabled = bool(getattr(capid, "GATE_ENABLE", False))
            # Conservative safety knobs (with robust defaults)
            self.ca_alpha = float(getattr(capid, "CA_ALPHA", 0.1))        # residual blend factor for CA
            self.diff_scale = float(getattr(capid, "DIFF_SCALE", 0.05))   # residual scale for DIFF
            self.gate_max = float(getattr(capid, "GATE_MAX", 0.5))        # clamp gate strength upper bound
            # CA mode: 'bridge' (default) couples deep prompts in CustomCLIP; 'shallow' applies here
            self.ca_mode = str(getattr(capid, "CA_MODE", "bridge")).lower()
            self.ca_shallow = bool(getattr(capid, "CA_SHALLOW", False))
            if self.ca_enabled:
                self.ca = CrossAttentionCoupler(
                    dim_text=512,
                    dim_vision=768,
                    depth=int(getattr(capid, "CA_DEPTH", 1)),
                    heads=int(getattr(capid, "CA_HEADS", 4)),
                    dropout=float(getattr(capid, "CA_DROPOUT", 0.0)),
                )
            if self.diff_enabled:
                self.diff_text = DiffusionPromptGenerator(channels=512, cond_channels=512)
                self.diff_vision = DiffusionPromptGenerator(channels=768, cond_channels=768)
                self.diff_steps = int(getattr(capid, "DIFF_STEPS", 2))
                self.diff_noise = float(getattr(capid, "DIFF_NOISE_STD", 0.1))
                self.cfg_scale = float(getattr(capid, "CFG_SCALE", 1.0))
            if self.gate_enabled:
                self.gate = InteractiveGate(
                    alpha=float(getattr(capid, "GATE_ALPHA", 1.0)),
                    beta=float(getattr(capid, "GATE_BETA", 0.0)),
                )
            # Instruction text for gating (optional)
            self.instruction_text = str(getattr(capid, "INSTRUCTION", ""))
            # Debugging
            self.debug_save = bool(getattr(capid, "DEBUG_SAVE", False))
            self.debug_freq = int(getattr(capid, "DEBUG_FREQ", 200))
            self.debug_dir = str(getattr(capid, "DEBUG_DIR", "output/capid_debug"))
            self._debug_step = 0
        self.capid_applied = False

    @torch.no_grad()
    def _encode_instruction(self, text: str):
        if text is None or len(text.strip()) == 0:
            return None
        try:
            tokens = clip.tokenize([text])  # (1, L)
            cm = self._clip_model_ref
            emb = cm.token_embedding(tokens.to(cm.text_projection.device).type(cm.dtype))
            x = emb + cm.positional_embedding.type(cm.dtype)
            x = x.permute(1, 0, 2)
            x = cm.transformer(x)
            x = x.permute(1, 0, 2)
            x = cm.ln_final(x).type(cm.dtype)
            feat = x[torch.arange(x.shape[0]), tokens.argmax(dim=-1).to(x.device)] @ cm.text_projection
            return feat
        except Exception:
            return None

    def construct_prompts(self, ctx, prefix, suffix, label=None):
        # dim0 is either batch_size (during training) or n_cls (during testing)
        # ctx: context tokens, with shape of (dim0, n_ctx, ctx_dim)
        # prefix: the sos token, with shape of (n_cls, 1, ctx_dim)
        # suffix: remaining tokens, with shape of (n_cls, *, ctx_dim)
        if label is not None:
            prefix = prefix[label]
            suffix = suffix[label]

        prompts = torch.cat(
            [
                prefix,  # (dim0, 1, dim)
                ctx,     # (dim0, n_ctx, dim)
                suffix,  # (dim0, *, dim)
            ],
            dim=1,
        )
        return prompts

    def forward(self):
        ctx = self.ctx
        if ctx.dim() == 2:
            ctx = ctx.unsqueeze(0).expand(self.n_cls, -1, -1)

        prefix = self.token_prefix
        suffix = self.token_suffix
        prompts = self.construct_prompts(ctx, prefix, suffix)

        # Before returning, transform the prompts to 768 for the visual side
        visual_deep_prompts = []
        for index, layer in enumerate(self.compound_prompt_projections):
            visual_deep_prompts.append(layer(self.compound_prompts_text[index]))

        # CAPID optional coupling/generation inside the prompt learner.
        # Align projection dtype with context dtype to avoid Half/Float mismatch after loading checkpoints
        if hasattr(self.proj, "weight") and self.proj.weight.dtype != self.ctx.dtype:
            self.proj.to(self.ctx.dtype)
        shared_ctx = self.proj(self.ctx)  # (n_ctx, 768)

        if getattr(self, "capid_enabled", False):
            # Expand to per-class vision tokens
            vis_tokens = shared_ctx.unsqueeze(0).expand(self.n_cls, -1, -1)

            gate_strength = 1.0
            if getattr(self, "gate_enabled", False):
                inst_feat = self._encode_instruction(self.instruction_text)
                # Use a lightweight text summary by averaging prompt tokens
                try:
                    txt_feat = prompts.mean(dim=1)  # (n_cls, 512)
                except Exception:
                    txt_feat = None
                g_tensor = self.gate(inst_feat, txt_feat, None)
                gate_strength = max(0.0, min(self.gate_max, float(g_tensor.item())))

            # Safe DIFF: only apply when a truly non-zero effect is configured
            should_diff = (
                getattr(self, "diff_enabled", False)
                and (getattr(self, "cfg_scale", 0.0) > 0.0)
                and (getattr(self, "diff_noise", 0.0) > 0.0)
                and (getattr(self, "diff_steps", 0) > 0)
            )
            cond_txt_pl = prompts.mean(dim=1)     # (n_cls, 512)
            cond_vis_pl = vis_tokens.mean(dim=1)  # (n_cls, 768)
            delta_txt = self.diff_text.sample(prompts, cond=cond_txt_pl,
                                              steps=self.diff_steps, noise_std=self.diff_noise)
            delta_txt = F.layer_norm(delta_txt, delta_txt.shape[-1:])
            prompts = prompts + self.diff_scale * self.cfg_scale * gate_strength * delta_txt
            delta_vis = self.diff_vision.sample(vis_tokens, cond=cond_vis_pl,
                                                steps=self.diff_steps, noise_std=self.diff_noise)
            delta_vis = F.layer_norm(delta_vis, delta_vis.shape[-1:])
            vis_tokens = vis_tokens + self.diff_scale * self.cfg_scale * gate_strength * delta_vis

            attn_maps = None
            # Only apply shallow CA here when explicitly enabled
            if getattr(self, "ca_enabled", False) and getattr(self, "ca_shallow", False) \
                    and getattr(self, "ca_mode", "bridge") != "bridge":
                # Residual CA with small alpha
                p_in, v_in = prompts, vis_tokens
                p_ca, v_ca, attn_maps = self.ca(p_in, v_in)
                p_ca = F.layer_norm(p_ca, p_ca.shape[-1:])
                v_ca = F.layer_norm(v_ca, v_ca.shape[-1:])
                alpha = max(0.0, min(1.0, float(self.ca_alpha)))
                prompts = (1.0 - alpha) * p_in + alpha * p_ca
                vis_tokens = (1.0 - alpha) * v_in + alpha * v_ca

            shared_ctx = vis_tokens.mean(dim=0)

            # Debug saves
            if getattr(self, "debug_save", False):
                self._debug_step += 1
                if self._debug_step % max(1, self.debug_freq) == 0:
                    try:
                        if attn_maps is not None and len(attn_maps) > 0:
                            a = attn_maps[0][0]
                            # Robust: handle 4D (B,H,Lq,Lk) and 3D (B,Lq,Lk)
                            if a.dim() == 4:
                                a_vis = a.mean(dim=1)[0]
                            elif a.dim() == 3:
                                a_vis = a[0]
                            else:
                                a_vis = a.flatten(1).unsqueeze(0)
                            out_path = osp.join(self.debug_dir, f"pl_attn_layer0_{self._debug_step:06d}.png")
                            save_debug_image(a_vis, out_path)
                        if getattr(self, "diff_enabled", False):
                            try:
                                dt = delta_txt[0].norm(dim=-1, keepdim=False)
                                dv = delta_vis[0].norm(dim=-1, keepdim=False)
                                save_debug_image(dt.unsqueeze(0),
                                                 osp.join(self.debug_dir, f"pl_delta_txt_norm_{self._debug_step:06d}.png"))
                                save_debug_image(dv.unsqueeze(0),
                                                 osp.join(self.debug_dir, f"pl_delta_vis_norm_{self._debug_step:06d}.png"))
                            except Exception:
                                pass
                    except Exception:
                        pass
            self.capid_applied = True

        # Now the other way around; return the originals since the visual side requires 768
        return prompts, shared_ctx, self.compound_prompts_text, visual_deep_prompts


class CustomCLIP(nn.Module):
    def __init__(self, cfg, classnames, clip_model):
        super().__init__()
        self.cfg = cfg
        self.prompt_learner = MultiModalPromptLearner(cfg, classnames, clip_model)
        self.tokenized_prompts = self.prompt_learner.tokenized_prompts
        self.image_encoder = clip_model.visual
        self.text_encoder = TextEncoder(clip_model)
        self.logit_scale = clip_model.logit_scale
        self.dtype = clip_model.dtype
        # Keep a lightweight reference for encoding free-form instructions
        self._clip_model_ref = clip_model

        # CAPID modules (optional)
        capid = cfg.TRAINER.CAPID
        self.capid_enabled = bool(getattr(capid, "ENABLE", False))
        if self.capid_enabled:
            self.ca_enabled = bool(getattr(capid, "CA_ENABLE", False))
            self.diff_enabled = bool(getattr(capid, "DIFF_ENABLE", False))
            self.gate_enabled = bool(getattr(capid, "GATE_ENABLE", False))
            self.diff_loss_weight = float(getattr(capid, "DIFF_LOSS_WEIGHT", 0.1))
            # Conservative safety knobs (mirror the prompt learner)
            self.ca_alpha = float(getattr(capid, "CA_ALPHA", 0.1))
            self.diff_scale = float(getattr(capid, "DIFF_SCALE", 0.05))
            self.gate_max = float(getattr(capid, "GATE_MAX", 0.5))
            self.ca_mode = str(getattr(capid, "CA_MODE", "bridge")).lower()
            if self.ca_enabled:
                self.ca = CrossAttentionCoupler(
                    dim_text=512,
                    dim_vision=768,
                    depth=int(getattr(capid, "CA_DEPTH", 1)),
                    heads=int(getattr(capid, "CA_HEADS", 4)),
                    dropout=float(getattr(capid, "CA_DROPOUT", 0.0)),
                )
                # Bridge module for deep compound prompts (text 512 <-> vision 768)
                if self.ca_mode == "bridge":
                    self.ca_bridge = CrossAttentivePromptBridge(
                        dim_text=512,
                        dim_vision=768,
                        dim_common=512,
                        heads=int(getattr(capid, "CA_HEADS", 4)),
                        dropout=float(getattr(capid, "CA_DROPOUT", 0.0)),
                        alpha=float(getattr(capid, "CA_ALPHA", 0.1)),
                    )
            if self.diff_enabled:
                self.diff_text = DiffusionPromptGenerator(channels=512, cond_channels=512)
                self.diff_vision = DiffusionPromptGenerator(channels=768, cond_channels=768)
                self.diff_steps = int(getattr(capid, "DIFF_STEPS", 2))
                self.diff_noise = float(getattr(capid, "DIFF_NOISE_STD", 0.1))
                self.cfg_scale = float(getattr(capid, "CFG_SCALE", 1.0))
            if self.gate_enabled:
                self.gate = InteractiveGate(
                    alpha=float(getattr(capid, "GATE_ALPHA", 1.0)),
                    beta=float(getattr(capid, "GATE_BETA", 0.0)),
                )
            # Debug state
            self.debug_save = bool(getattr(capid, "DEBUG_SAVE", False))
            self.debug_freq = int(getattr(capid, "DEBUG_FREQ", 200))
            self.debug_dir = str(getattr(capid, "DEBUG_DIR", "output/capid_debug"))
            self._debug_step = 0

    @torch.no_grad()
    def _encode_instruction(self, text: str):
        if text is None or len(text.strip()) == 0:
            return None
        # Lightweight proxy: mean-pooled token embeddings (no transformer), dtype/device-safe
        try:
            tokens = clip.tokenize([text])  # (1, L)
            emb = self._clip_model_ref.token_embedding(
                tokens.to(self.logit_scale.device).type(self.dtype))  # (1, L, 512)
            feat = emb.mean(dim=1)  # (1, 512)
            return feat
        except Exception:
            return None

    def forward(self, image, label=None):
        tokenized_prompts = self.tokenized_prompts
        logit_scale = self.logit_scale.exp()

        prompts, shared_ctx, deep_compound_prompts_text, deep_compound_prompts_vision = self.prompt_learner()

        # Bridge deep prompts before the encoders if enabled
        if getattr(self, "capid_enabled", False) and getattr(self, "ca_enabled", False) \
                and getattr(self, "ca_mode", "bridge") == "bridge":
            try:
                deep_compound_prompts_text, deep_compound_prompts_vision = self.ca_bridge(
                    deep_compound_prompts_text,
                    deep_compound_prompts_vision,
                    alpha=float(getattr(self, "ca_alpha", 0.1)),
                )
            except Exception:
                # Fallback: keep the originals if any shape issue occurs
                pass

        # CAPID optional pipeline
        if getattr(self, "capid_enabled", False) and not getattr(self.prompt_learner, "capid_applied", False):
            # Prepare per-class vision tokens from shared_ctx for coupling/diffusion
            # shared_ctx: (n_ctx, 768) -> (n_cls, n_ctx, 768)
            vis_tokens = shared_ctx.unsqueeze(0).expand(self.prompt_learner.n_cls, -1, -1)

            gate_strength = 1.0
            if getattr(self, "gate_enabled", False):
                instruction = getattr(self.cfg.TRAINER.CAPID, "INSTRUCTION", "")
                inst_feat = self._encode_instruction(instruction)
                # Use text-only gating by default to avoid extra compute; img_feat is kept None.
                # Compute a quick baseline text feature from the current prompts (detached)
                try:
                    txt_feat_base = self.text_encoder(prompts, tokenized_prompts,
                                                      deep_compound_prompts_text).detach()
                except Exception:
                    txt_feat_base = None
                g_tensor = self.gate(inst_feat, txt_feat_base, None)
                gate_strength = max(0.0, min(self.gate_max, float(g_tensor.item())))

            # Safe DIFF
            should_diff = (
                getattr(self, "diff_enabled", False)
                and (getattr(self, "cfg_scale", 0.0) > 0.0)
                and (getattr(self, "diff_noise", 0.0) > 0.0)
                and (getattr(self, "diff_steps", 0) > 0)
            )
            cond_txt = prompts.mean(dim=1)     # (n_cls, 512)
            cond_vis = vis_tokens.mean(dim=1)  # (n_cls, 768)
            delta_txt = self.diff_text.sample(prompts, cond=cond_txt,
                                              steps=self.diff_steps, noise_std=self.diff_noise)
            delta_txt = F.layer_norm(delta_txt, delta_txt.shape[-1:])
            prompts = prompts + self.diff_scale * self.cfg_scale * gate_strength * delta_txt
            delta_vis = self.diff_vision.sample(vis_tokens, cond=cond_vis,
                                                steps=self.diff_steps, noise_std=self.diff_noise)
            delta_vis = F.layer_norm(delta_vis, delta_vis.shape[-1:])
            vis_tokens = vis_tokens + self.diff_scale * self.cfg_scale * gate_strength * delta_vis

            attn_maps = None
            # If using bridge mode, skip shallow CA here
            if getattr(self, "ca_enabled", False) and getattr(self, "ca_mode", "bridge") != "bridge":
                p_in, v_in = prompts, vis_tokens
                p_ca, v_ca, attn_maps = self.ca(p_in, v_in)
                p_ca = F.layer_norm(p_ca, p_ca.shape[-1:])
                v_ca = F.layer_norm(v_ca, v_ca.shape[-1:])
                alpha = max(0.0, min(1.0, float(self.ca_alpha)))
                prompts = (1.0 - alpha) * p_in + alpha * p_ca
                vis_tokens = (1.0 - alpha) * v_in + alpha * v_ca

            # Reduce back to the shared_ctx shape expected by the vision encoder
            shared_ctx = vis_tokens.mean(dim=0)  # (n_ctx, 768)

            # Debug saves (very lightweight)
            if getattr(self, "debug_save", False):
                self._debug_step += 1
                if self._debug_step % max(1, self.debug_freq) == 0:
                    try:
                        if attn_maps is not None and len(attn_maps) > 0:
                            a = attn_maps[0][0]
                            if a.dim() == 4:
                                a_vis = a.mean(dim=1)[0]
                            elif a.dim() == 3:
                                a_vis = a[0]
                            else:
                                a_vis = a.flatten(1).unsqueeze(0)
                            out_path = osp.join(self.debug_dir, f"attn_layer0_{self._debug_step:06d}.png")
                            save_debug_image(a_vis, out_path)
                        if getattr(self, "diff_enabled", False):
                            # Save magnitude heatmaps for the first class' deltas
                            try:
                                dt = delta_txt[0].norm(dim=-1, keepdim=False)  # (L_text,)
                                dv = delta_vis[0].norm(dim=-1, keepdim=False)  # (L_vis,)
                                # Expand to 2D for visualization
                                save_debug_image(dt.unsqueeze(0),
                                                 osp.join(self.debug_dir, f"delta_txt_norm_{self._debug_step:06d}.png"))
                                save_debug_image(dv.unsqueeze(0),
                                                 osp.join(self.debug_dir, f"delta_vis_norm_{self._debug_step:06d}.png"))
                            except Exception:
                                pass
                    except Exception:
                        pass

        text_features = self.text_encoder(prompts, tokenized_prompts, deep_compound_prompts_text)
        image_features = self.image_encoder(image.type(self.dtype), shared_ctx, deep_compound_prompts_vision)

        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        logits = logit_scale * image_features @ text_features.t()

        if self.prompt_learner.training:
            loss = F.cross_entropy(logits, label)
            if getattr(self, "capid_enabled", False) and getattr(self, "diff_enabled", False) \
                    and (getattr(self, "diff_loss_weight", 0.0) > 0):
                n_cls = self.prompt_learner.n_cls
                vis_tokens = shared_ctx.unsqueeze(0).expand(n_cls, -1, -1)  # (n_cls, n_ctx, 768)
                # Conditions: text uses the token mean of prompts; vision uses the token mean of shared_ctx
                cond_txt = prompts.mean(dim=1)  # (n_cls, 512)
                cond_vis = shared_ctx.mean(dim=0, keepdim=True).expand(n_cls, -1)  # (n_cls, 768)
                try:
                    l_txt = self.diff_text.diffusion_loss(prompts, cond_txt,
                                                          noise_std=float(getattr(self, "diff_noise", 0.1)))
                except Exception:
                    l_txt = torch.tensor(0.0, device=loss.device, dtype=loss.dtype)
                try:
                    l_vis = self.diff_vision.diffusion_loss(vis_tokens, cond_vis,
                                                            noise_std=float(getattr(self, "diff_noise", 0.1)))
                except Exception:
                    l_vis = torch.tensor(0.0, device=loss.device, dtype=loss.dtype)
                loss = loss + self.diff_loss_weight * (l_txt + l_vis)
            return loss

        return logits


def _get_clones(module, N):
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])


@TRAINER_REGISTRY.register()
class MaPLe(TrainerX):
    def check_cfg(self, cfg):
        assert cfg.TRAINER.MAPLE.PREC in ["fp16", "fp32", "amp"]

    def build_model(self):
        cfg = self.cfg
        classnames = self.dm.dataset.classnames

        print(f"Loading CLIP (backbone: {cfg.MODEL.BACKBONE.NAME})")
        clip_model = load_clip_to_cpu(cfg)

        if cfg.TRAINER.MAPLE.PREC == "fp32" or cfg.TRAINER.MAPLE.PREC == "amp":
            # CLIP's default precision is fp16
            clip_model.float()

        print("Building custom CLIP")
        self.model = CustomCLIP(cfg, classnames, clip_model)

        print("Turning off gradients in both the image and the text encoder")
        # Default: only update prompt_learner (MaPLe). If CAPID is enabled, also allow
        # the small bridge/CA/DIFF/Gate modules to learn.
        capid_cfg = getattr(self.cfg.TRAINER, "CAPID", None)
        capid_on = bool(getattr(capid_cfg, "ENABLE", False)) if capid_cfg is not None else False
        capid_train_only = bool(getattr(capid_cfg, "TRAIN_ONLY_CAPID", False)) if capid_cfg is not None else False
        # Freeze the CLIP backbone under _clip_model_ref; only train the open-track
        # prompt subset plus the small CAPID modules
        for name, param in self.model.named_parameters():
            # Hard-block the CLIP backbone
            if "prompt_learner._clip_model_ref" in name:
                param.requires_grad_(False)
                continue
            if capid_on and capid_train_only:
                # Train only the CAPID modules
                allow = (
                    ("ca_bridge" in name)
                    or (".ca." in name or name.endswith(".ca"))
                    or ("diff_text" in name)
                    or ("diff_vision" in name)
                    or ("gate" in name)
                )
            else:
                # Open-track prompt subset + CAPID modules (+ VPT)
                allow = (
                    (
                        name.startswith("prompt_learner.ctx")
                        or name.startswith("prompt_learner.proj")
                        or name.startswith("prompt_learner.compound_prompts_text.0")
                        or name.startswith("prompt_learner.compound_prompt_projections.0")
                    )
                    or (capid_on and (
                        ("ca_bridge" in name)
                        or (".ca." in name or name.endswith(".ca"))
                        or ("diff_text" in name)
                        or ("diff_vision" in name)
                        or ("gate" in name)
                    ))
                    or ("VPT" in name)
                )
            param.requires_grad_(bool(allow))

        # Double check
        enabled = set()
        for name, param in self.model.named_parameters():
            if param.requires_grad:
                enabled.add(name)
        print(f"Parameters to be updated: {enabled}")

        if cfg.MODEL.INIT_WEIGHTS:
            load_pretrained_weights(self.model, cfg.MODEL.INIT_WEIGHTS)

        self.model.to(self.device)
        # NOTE: only give prompt_learner to the optimizer
        self.optim = build_optimizer(self.model, cfg.OPTIM)
        self.sched = build_lr_scheduler(self.optim, cfg.OPTIM)
        self.register_model("MultiModalPromptLearner", self.model, self.optim, self.sched)

        self.scaler = GradScaler() if cfg.TRAINER.MAPLE.PREC == "amp" else None

        # Note that multi-GPU training could be slow because CLIP's size is
        # big, which slows down the copy operation in DataParallel
        device_count = torch.cuda.device_count()
        if device_count > 1:
            print(f"Multiple GPUs detected (n_gpus={device_count}), use all of them!")
            self.model = nn.DataParallel(self.model)

    def forward_backward(self, batch):
        image, label = self.parse_batch_train(batch)

        model = self.model
        optim = self.optim
        scaler = self.scaler

        prec = self.cfg.TRAINER.MAPLE.PREC
        if prec == "amp":
            with autocast():
                loss = model(image, label)
            optim.zero_grad()
            scaler.scale(loss).backward()
            scaler.step(optim)
            scaler.update()
        else:
            loss = model(image, label)
            optim.zero_grad()
            loss.backward()
            optim.step()

        loss_summary = {"loss": loss.item()}

        if (self.batch_idx + 1) == self.num_batches:
            self.update_lr()

        return loss_summary

    def parse_batch_train(self, batch):
        input = batch["img"]
        label = batch["label"]
        input = input.to(self.device)
        label = label.to(self.device)
        return input, label

    def load_model(self, directory, epoch=None):
        if not directory:
            print("Note that load_model() is skipped as no pretrained model is given")
            return

        names = self.get_model_names()

        # By default, the best model is loaded
        model_file = "model-best.pth.tar"
        if epoch is not None:
            model_file = "model.pth.tar-" + str(epoch)

        for name in names:
            model_path = osp.join(directory, name, model_file)
            if not osp.exists(model_path):
                raise FileNotFoundError('Model not found at "{}"'.format(model_path))

            checkpoint = load_checkpoint(model_path)
            state_dict = checkpoint["state_dict"]
            epoch = checkpoint["epoch"]

            # Ignore fixed token vectors
            if "prompt_learner.token_prefix" in state_dict:
                del state_dict["prompt_learner.token_prefix"]
            if "prompt_learner.token_suffix" in state_dict:
                del state_dict["prompt_learner.token_suffix"]

            print('Loading weights to {} from "{}" (epoch = {})'.format(name, model_path, epoch))
            # set strict=False
            self._models[name].load_state_dict(state_dict, strict=False)

Requirements:
1) In the class MultiModalPromptLearner.forward(), find the two sampling lines under `if should_diff and (not self.training):`:
   delta_txt = self.diff_text.sample(prompts, cond=None, steps=self.diff_steps, noise_std=self.diff_noise)
   delta_vis = self.diff_vision.sample(vis_tokens, cond=None, steps=self.diff_steps, noise_std=self.diff_noise)
   and change them to:
   cond_txt_pl = prompts.mean(dim=1)     # (n_cls, 512)
   cond_vis_pl = vis_tokens.mean(dim=1)  # (n_cls, 768)
   delta_txt = self.diff_text.sample(prompts, cond=cond_txt_pl, steps=self.diff_steps, noise_std=self.diff_noise)
   delta_vis = self.diff_vision.sample(vis_tokens, cond=cond_vis_pl, steps=self.diff_steps, noise_std=self.diff_noise)
2) In the same file, in the class CustomCLIP.forward(), find the other two sampling lines under `if should_diff and (not self.training):`:
   delta_txt = self.diff_text.sample(prompts, cond=None, steps=self.diff_steps, noise_std=self.diff_noise)
   delta_vis = self.diff_vision.sample(vis_tokens, cond=None, steps=self.diff_steps, noise_std=self.diff_noise)
   and change them to:
   cond_txt = prompts.mean(dim=1)     # (n_cls, 512)
   cond_vis = vis_tokens.mean(dim=1)  # (n_cls, 768)
   delta_txt = self.diff_text.sample(prompts, cond=cond_txt, steps=self.diff_steps, noise_std=self.diff_noise)
   delta_vis = self.diff_vision.sample(vis_tokens, cond=cond_vis, steps=self.diff_steps, noise_std=self.diff_noise)
After making the changes, send me the complete modified code.
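For orientation, this is roughly how the requested edit sits in context once applied — a sketch assuming the `if should_diff and (not self.training):` guard named in the request (the listing above already shows the conditioned `sample` calls); the same pattern repeats in CustomCLIP.forward() with `cond_txt`/`cond_vis`:

            # Inside MultiModalPromptLearner.forward(), after should_diff is computed
            # (sketch under the guard assumed by the request, not the exact listing above):
            if should_diff and (not self.training):
                # Condition both samplers on mean-pooled tokens instead of cond=None
                cond_txt_pl = prompts.mean(dim=1)     # (n_cls, 512)
                cond_vis_pl = vis_tokens.mean(dim=1)  # (n_cls, 768)
                delta_txt = self.diff_text.sample(prompts, cond=cond_txt_pl,
                                                  steps=self.diff_steps, noise_std=self.diff_noise)
                delta_vis = self.diff_vision.sample(vis_tokens, cond=cond_vis_pl,
                                                    steps=self.diff_steps, noise_std=self.diff_noise)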