Exists Versus In

This article compares the use of EXISTS and IN in SQL queries, focusing on efficiency and typical use cases, and notes that EXISTS generally performs better than IN when the comparison is driven by a subquery.
1. EXISTS only checks whether any matching rows exist, whereas IN compares against the actual values returned by the subquery (see the first sketch below).
2. EXISTS typically offers better performance than IN with a subquery, because EXISTS can stop as soon as the first matching row is found, while IN may have to evaluate the full subquery result before filtering (see the second sketch below).
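
The semantic difference is easiest to see side by side. A minimal sketch, assuming hypothetical customers and orders tables; both queries return customers that have at least one order:

  -- IN compares customer_id against the set of values produced by the subquery
  SELECT c.customer_id, c.name
  FROM customers c
  WHERE c.customer_id IN (SELECT o.customer_id FROM orders o);

  -- EXISTS only asks, for each customer, whether at least one matching order row exists
  SELECT c.customer_id, c.name
  FROM customers c
  WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);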
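
For the performance point, the same hypothetical tables illustrate the usual rule of thumb; how much it matters depends on the specific database, since many optimizers rewrite one form into the other:

  -- The IN form may evaluate the whole subquery before the outer filter is applied
  SELECT c.customer_id
  FROM customers c
  WHERE c.customer_id IN (SELECT o.customer_id FROM orders o);

  -- The correlated EXISTS form can stop scanning orders for a given customer
  -- as soon as the first matching row is found
  SELECT c.customer_id
  FROM customers c
  WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);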