
Author: louwill; reposted from 机器学习实验室
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(['undergraduate', 'master', 'PhD', 'Postdoc'])
le.transform(['undergraduate', 'master', 'PhD', 'Postdoc'])
array([3, 2, 0, 1], dtype=int64)
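The integer assignment above follows the alphabetical order of the labels. The learned mapping can be inspected through the encoder's `classes_` attribute and reversed with `inverse_transform`; a minimal sketch:

```python
from sklearn import preprocessing

le = preprocessing.LabelEncoder()
le.fit(['undergraduate', 'master', 'PhD', 'Postdoc'])

# classes_ holds the sorted category values; the position is the encoded integer
print(list(le.classes_))  # ['PhD', 'Postdoc', 'master', 'undergraduate']

# inverse_transform maps encoded integers back to the original labels
print(list(le.inverse_transform([3, 2, 0, 1])))
```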
One-hot Encoding
One-hot encoding is probably the most widely used encoding for categorical features. Suppose a categorical feature takes m distinct values; one-hot encoding converts it into m binary features, each corresponding to one of those values.
import pandas as pd
df = pd.DataFrame({'f1': ['A', 'B', 'C'], 'f2': ['Male', 'Female', 'Male']})
df = pd.get_dummies(df, columns=['f1', 'f2'])
df
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown='ignore')
X = [['Male', 1], ['Female', 3], ['Female', 2]]
enc.fit(X)
enc.transform([['Female', 1], ['Male', 4]]).toarray()
array([[1., 0., 1., 0., 0.],
       [0., 1., 0., 0., 0.]])
Target Encoding
Target Encoding replaces each category with the mean of the target variable over that category. CatBoost makes heavy use of such target statistics to encode categorical features. In practice, however, directly substituting the raw category mean leaks label information into the features; the mainstream remedy is to compute the target means with a two-level cross-validation scheme. Target Encoding generally suits unordered categorical features with more than 5 distinct values. Reference code:

### Code from the Zhihu column:
### https://zhuanlan.zhihu.com/p/40231966
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

n_folds = 20
n_inner_folds = 10
likelihood_encoded = pd.Series()
likelihood_coding_map = {}

# global prior mean
oof_default_mean = train[target].mean()
kf = KFold(n_splits=n_folds, shuffle=True)
oof_mean_cv = pd.DataFrame()
split = 0
for infold, oof in kf.split(train[feature]):
    print('============== level 1 encoding..., fold %s ============' % split)
    inner_kf = KFold(n_splits=n_inner_folds, shuffle=True)
    inner_oof_default_mean = train.iloc[infold][target].mean()
    inner_split = 0
    inner_oof_mean_cv = pd.DataFrame()
    likelihood_encoded_cv = pd.Series()
    for inner_infold, inner_oof in inner_kf.split(train.iloc[infold]):
        print('============== level 2 encoding..., inner fold %s ============' % inner_split)
        # inner out-of-fold mean
        oof_mean = train.iloc[inner_infold].groupby(by=feature)[target].mean()
        # assign oof_mean to the infold
        likelihood_encoded_cv = pd.concat([
            likelihood_encoded_cv,
            train.iloc[infold].apply(
                lambda x: oof_mean[x[feature]]
                if x[feature] in oof_mean.index
                else inner_oof_default_mean, axis=1)])
        inner_oof_mean_cv = inner_oof_mean_cv.join(
            pd.DataFrame(oof_mean), rsuffix=str(inner_split), how='outer')
        inner_oof_mean_cv.fillna(inner_oof_default_mean, inplace=True)
        inner_split += 1
    oof_mean_cv = oof_mean_cv.join(
        pd.DataFrame(inner_oof_mean_cv), rsuffix=str(split), how='outer')
    oof_mean_cv.fillna(value=oof_default_mean, inplace=True)
    split += 1
    print('============ final mapping... ===========')
    likelihood_encoded = pd.concat([
        likelihood_encoded,
        train.iloc[oof].apply(
            lambda x: np.mean(inner_oof_mean_cv.loc[x[feature]].values)
            if x[feature] in inner_oof_mean_cv.index
            else oof_default_mean, axis=1)])
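The idea behind the reference code can be sketched more simply with a single level of out-of-fold means plus additive smoothing toward the global mean, a common variant. The column names `cat` and `y` and the toy data below are made up for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

def target_encode_oof(df, feature, target, n_splits=5, smoothing=10.0, seed=0):
    """Out-of-fold target encoding, shrunk toward the global target mean."""
    global_mean = df[target].mean()
    encoded = pd.Series(index=df.index, dtype=float)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for fit_idx, enc_idx in kf.split(df):
        fit = df.iloc[fit_idx]
        stats = fit.groupby(feature)[target].agg(['mean', 'count'])
        # shrink small-count categories toward the global mean
        smooth = (stats['count'] * stats['mean'] + smoothing * global_mean) \
                 / (stats['count'] + smoothing)
        # rows in enc_idx only ever see means computed on the other folds
        encoded.iloc[enc_idx] = (
            df.iloc[enc_idx][feature].map(smooth).fillna(global_mean).values)
    return encoded

# hypothetical toy data
df = pd.DataFrame({'cat': list('AABBBC' * 5),
                   'y': np.random.RandomState(0).rand(30)})
df['cat_te'] = target_encode_oof(df, 'cat', 'y')
```

Because each row is encoded only with statistics from the other folds, the encoding never sees that row's own label, which is the leakage the article warns about.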
Model-native Encoding
Algorithms such as LightGBM and CatBoost can encode categorical features directly inside the model: simply mark the categorical columns and pass them to the corresponding API. An example:

lgb_train = lgb.Dataset(train2[features], train2['total_cost'],
                        categorical_feature=['sex'])
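LightGBM also treats pandas columns of dtype 'category' as categorical automatically. A minimal sketch of preparing such a column (the DataFrame here is made up; the commented `lgb.Dataset` call mirrors the snippet above and assumes lightgbm is installed):

```python
import pandas as pd

# hypothetical training frame
train2 = pd.DataFrame({'sex': ['M', 'F', 'F', 'M'],
                       'age': [30, 25, 41, 38],
                       'total_cost': [100.0, 80.0, 120.0, 90.0]})

# marking the column as 'category' lets LightGBM handle it natively
train2['sex'] = train2['sex'].astype('category')

# the integer codes pandas stores under the hood (-1 would mark missing values)
codes = train2['sex'].cat.codes

# with lightgbm installed, this matches the snippet above:
# import lightgbm as lgb
# lgb_train = lgb.Dataset(train2[['sex', 'age']], train2['total_cost'],
#                         categorical_feature=['sex'])
```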
Summary
Based on this article, the categorical feature encodings in machine learning can be summarized as follows:
Label Encoding: the categories have an internal order
One-hot Encoding: the categories are unordered; number of distinct values < 5
Target Encoding: the categories are unordered; number of distinct values > 5
Model-native encoding: LightGBM, CatBoost
Layout: 喵君姐姐




