C. New Year and Permutation

This post discusses how to compute the total happiness over all possible permutations, given the length and the modulus. Happiness is defined as the number of subsegments of a permutation satisfying a specific condition, and the solution involves combinatorial counting, algorithmic optimization, and modular arithmetic.

Link: https://codeforces.ml/contest/1284/problem/C

Recall that a permutation is an array consisting of n distinct integers from 1 to n in arbitrary order. For example, [2,3,1,5,4] is a permutation, but [1,2,2] is not a permutation (2 appears twice in the array) and [1,3,4] is also not a permutation (n = 3 but there is 4 in the array).

A sequence a is a subsegment of a sequence b if a can be obtained from b by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. We will denote the subsegments as [l, r], where l, r are two integers with 1 ≤ l ≤ r ≤ n. This indicates the subsegment where l − 1 elements from the beginning and n − r elements from the end are deleted from the sequence.

For a permutation p_1, p_2, …, p_n, we define a framed segment as a subsegment [l, r] where max{p_l, p_{l+1}, …, p_r} − min{p_l, p_{l+1}, …, p_r} = r − l. For example, for the permutation (6,7,1,8,5,3,2,4) some of its framed segments are: [1,2], [5,8], [6,7], [3,3], [8,8]. In particular, a subsegment [i, i] is always a framed segment for any i between 1 and n, inclusive.

We define the happiness of a permutation p as the number of pairs (l, r) such that 1 ≤ l ≤ r ≤ n and [l, r] is a framed segment. For example, the permutation [3,1,2] has happiness 5: all segments except [1,2] are framed segments.
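
As a quick sanity check on these definitions, the happiness of a single permutation can be computed by brute force: scan every pair (l, r), maintain the running minimum and maximum, and count the pairs where max − min = r − l. The sketch below is illustrative only; the function name happiness is our own and not part of the original post.

#include<bits/stdc++.h>
using namespace std;
// Brute-force happiness of one permutation: count pairs (l, r) whose
// max - min equals r - l, keeping a running min/max (O(n^2)).
long long happiness(const vector<int>& p)
{
	int n = p.size();
	long long cnt = 0;
	for (int l = 0; l < n; l++)
	{
		int mn = p[l], mx = p[l];
		for (int r = l; r < n; r++)
		{
			mn = min(mn, p[r]);
			mx = max(mx, p[r]);
			if (mx - mn == r - l) cnt++;   // [l, r] is a framed segment
		}
	}
	return cnt;
}
int main()
{
	cout << happiness({3, 1, 2}) << endl;   // prints 5, matching the example above
}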

Given integers n and m, Jongwon wants to compute the sum of happiness for all permutations of length n, modulo the prime number m. Note that there exist n! (factorial of n) different permutations of length n.

Input

The only line contains two integers n and m (1 ≤ n ≤ 250000, 10^8 ≤ m ≤ 10^9, m is prime).

Output

Print r (0 ≤ r < m), the sum of happiness for all permutations of length n, modulo the prime number m.

Examples

input

1 993244853

output

1

input

2 993244853

output

6

input

3 993244853

output

32

input

2019 993244853

output

923958830

input

2020 437122297

output

265955509

Note

For the sample input n = 3, let's consider all permutations of length 3:

  • [1,2,3], all subsegments are framed segments. Happiness is 6.
  • [1,3,2], all subsegments except [1,2] are framed segments. Happiness is 5.
  • [2,1,3], all subsegments except [2,3] are framed segments. Happiness is 5.
  • [2,3,1], all subsegments except [2,3] are framed segments. Happiness is 5.
  • [3,1,2], all subsegments except [1,2] are framed segments. Happiness is 5.
  • [3,2,1], all subsegments are framed segments. Happiness is 6.

Thus, the sum of happiness is 6 + 5 + 5 + 5 + 5 + 6 = 32.
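
The closed form evaluated by the code below follows from a direct count. Fix a segment length len: there are n − len + 1 positions (l, r) of that length, n − len + 1 choices of the consecutive value range placed inside it, len! orders of those values, and (n − len)! arrangements of the remaining elements. Summing over all lengths,

    answer = Σ_{len=1}^{n} (n − len + 1)^2 · len! · (n − len)!   (mod m)

which the code groups as (n − len + 1) · len! · (n − len + 1)!. For n = 3 this gives 9·1·2 + 4·2·1 + 1·6·1 = 18 + 8 + 6 = 32, in agreement with the note above.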

Code:

#include<bits/stdc++.h>
using namespace std;
long long n, mod, s;
long long a[250005];   // a[i] = i! mod m (n <= 250000)
int main()
{
	cin >> n >> mod;
	// Precompute factorials modulo m.
	a[0] = 1;
	for (int i = 1; i <= n; i++)
	{
		a[i] = a[i - 1] * i % mod;
	}
	// For each segment length i, add (n-i+1) * i! * (n-i+1)!:
	// (n-i+1) positions for the segment, i! orders of its elements,
	// and (n-i+1)! = (n-i+1) * (n-i)! covers the (n-i+1) possible value
	// ranges times the (n-i)! arrangements of the remaining elements.
	s = 0;
	for (int i = 1; i <= n; i++)
	{
		s += (n - i + 1) % mod * (a[i] * a[n - i + 1] % mod);
		s %= mod;
	}
	cout << s << endl;
}
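
For small n, the closed form can be cross-checked by enumerating all n! permutations with std::next_permutation and counting framed segments directly. This is only a sanity-check sketch; the helper name bruteforce_sum is ours.

#include<bits/stdc++.h>
using namespace std;
// Exhaustive check for small n: sum the happiness of every permutation of 1..n.
long long bruteforce_sum(int n, long long m)
{
	vector<int> p(n);
	iota(p.begin(), p.end(), 1);
	long long total = 0;
	do
	{
		for (int l = 0; l < n; l++)
		{
			int mn = p[l], mx = p[l];
			for (int r = l; r < n; r++)
			{
				mn = min(mn, p[r]);
				mx = max(mx, p[r]);
				if (mx - mn == r - l) total++;   // framed segment
			}
		}
	} while (next_permutation(p.begin(), p.end()));
	return total % m;
}
int main()
{
	cout << bruteforce_sum(3, 993244853) << endl;   // prints 32, matching the third sample
}

The accepted solution itself needs only the factorial table and a single loop, i.e. O(n) time, which is comfortable for n ≤ 250000.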

 
