One hot encoding
One-hot encoding creates a new binary (0/1) column for each possible value in the original categorical data, indicating whether that value is present in each row.
## explore the data types to find the categorical (object) columns
print(train_data.dtypes)
## one-hot encode the predictors
one_hot_encoded_train_predictors = pd.get_dummies(train_predictors)
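As a minimal illustration with made-up data (not from the original example), get_dummies turns a single categorical column into one indicator column per distinct value:

import pandas as pd
df = pd.DataFrame({'Color': ['Red', 'Green', 'Red', 'Blue']})
print(pd.get_dummies(df))
## produces one indicator column per distinct value: Color_Blue, Color_Green, Color_Red
## (the indicators are 0/1 integers or booleans, depending on your pandas version)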
Scikit-learn is sensitive to the ordering of columns, so if a categorical column has a different set of values in the training data vs. the test data, the results will be nonsense.
To ensure the test data is encoded in the same way as the training data, use the align command:
one_hot_encoded_train_predictors = pd.get_dummies(train_predictors)
one_hot_encoded_test_predictors = pd.get_dummies(test_predictors)
final_train, final_test = one_hot_encoded_train_predictors.align(one_hot_encoded_test_predictors, join='left', axis=1)
## join='left': the equivalent of SQL's left join, i.e. keep exactly the columns that appear in the training data
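Continuing from final_train and final_test above, a minimal end-to-end sketch might look like the following; target is assumed to already exist, and the choice of RandomForestRegressor is purely illustrative, not from the original:

from sklearn.ensemble import RandomForestRegressor

## categories that appear in training but not in the test data become NaN columns in final_test after align,
## so fill them with 0 before predicting
final_test = final_test.fillna(0)
model = RandomForestRegressor(random_state=0)
model.fit(final_train, target)
predictions = model.predict(final_test)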
Further learning
Pipelines: scikit-learn offers a class for one-hot encoding (OneHotEncoder), and it can be added to a pipeline; see the first sketch after this list.
Applications to Text for Deep Learning: Keras and TensorFlow have functionality for one-hot encoding, which is useful for working with text; see the second sketch below.
Categoricals with Many Values: scikit-learn's FeatureHasher uses the hashing trick to store high-dimensional data compactly. This will add some complexity to your modeling code; see the last sketch below.
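A minimal sketch of the pipeline idea, assuming train_predictors and target exist as above; the column selection and the model are illustrative assumptions, not from the original:

from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

categorical_cols = train_predictors.select_dtypes(include='object').columns
preprocess = ColumnTransformer(
    [('onehot', OneHotEncoder(handle_unknown='ignore'), categorical_cols)],
    remainder='passthrough')
pipeline = Pipeline([('preprocess', preprocess), ('model', RandomForestRegressor(random_state=0))])
pipeline.fit(train_predictors, target)
## handle_unknown='ignore' sidesteps the train/test mismatch problem: unseen categories encode as all zeros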
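For the Keras/TensorFlow point, one relevant utility is to_categorical, which one-hot encodes integer class labels (for example, integer-encoded tokens); a tiny sketch with made-up labels:

from tensorflow.keras.utils import to_categorical

labels = [0, 2, 1, 2]                       ## integer-encoded categories
one_hot = to_categorical(labels, num_classes=3)
print(one_hot)                              ## a 4x3 array of 0/1 rows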
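And a minimal FeatureHasher sketch (the column values are made up): each sample is passed as a list of strings, and the hashing trick maps them into a fixed number of columns no matter how many distinct values exist.

from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(n_features=16, input_type='string')
hashed = hasher.transform([['Red'], ['Green'], ['Blue'], ['Red']])
print(hashed.shape)  ## (4, 16) sparse matrix, regardless of the number of distinct category values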

This article has explored the principle and application of one-hot encoding, showing through practical examples how to one-hot encode with pandas and scikit-learn while keeping the training and test data consistent. It has also introduced the role of one-hot encoding in deep learning and text processing, and options for handling categoricals with many values.