Task description: based on the provided user behavior data, contestants need to analyze how user behavior characteristics match ad content and predict, as accurately as possible, whether users will click the ads in the test set; submissions are scored by AUC.
Score: 0.6120, ranked 60+.
I tried a lot of models but none of them improved the score, and I'm curious what the top players' code looks like.
Sharing my approach:
Feature engineering
Time features are a core factor in most ad click prediction tasks, since user behavior differs a lot across time periods (for example, evenings are prime time for NetEase Cloud Music).
From the exposure timestamp I extracted the features week, hour, hour_m, cos_hour and day_of_week, split the day into four periods (morning, afternoon, evening, night), and added a working-hours flag.
# imports used by the snippets below
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from lightgbm import LGBMClassifier

data['exposure_time'] = pd.to_datetime(data['exposure_time'])
data['week'] = data['exposure_time'].dt.isocalendar().week
data['hour'] = data['exposure_time'].dt.hour
data['hour_m'] = data['hour'] + data['exposure_time'].dt.minute / 60
data['cos_hour'] = np.cos(2 * np.pi * data['hour_m'] / 24)
data['day_of_week'] = data['exposure_time'].dt.dayofweek

def get_time_period(hour):
    if 6 <= hour < 12:
        return 'morning'
    elif 12 <= hour < 18:
        return 'afternoon'
    elif 18 <= hour < 24:
        return 'evening'
    else:
        return 'night'
data['time_period'] = data['hour'].apply(get_time_period)
data['is_work_time'] = data['hour'].apply(lambda x: 1 if 9 <= x < 17 else 0)
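A note that is not in the original write-up: cos_hour by itself maps symmetric hours to the same value (for instance 06:00 and 18:00 both give 0), so a sine companion is commonly added to make the cyclic encoding unambiguous; a one-line sketch:
data['sin_hour'] = np.sin(2 * np.pi * data['hour_m'] / 24)  # together with cos_hour this uniquely encodes the time of day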
In addition, I added two new features:
purchase_efficiency: purchase efficiency (purchase history relative to activity score).
ad_quality_score: ad quality (advertiser score times historical CTR).
data['purchase_efficiency'] = data['purchase_history'] / (data['activity_score'] + 1e-6)
data['ad_quality_score'] = data['advertiser_score'] * data['historical_ctr']
Occupation, region, ad category and the other categorical columns were encoded with LabelEncoder.
label_encoders = {}
for col in ['occupation', 'category', 'material_type', 'region', 'device', 'time_period']:
    le = LabelEncoder()
    data[col] = le.fit_transform(data[col])
    label_encoders[col] = le
Frequency encoding was also applied to occupation, region, device and similar columns to capture how popular each category is.
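A minimal sketch of what the frequency encoding could look like for the columns named above (the *_freq column names are my own, not from the original code):
for col in ['occupation', 'region', 'device']:
    freq = data[col].value_counts(normalize=True)  # share of rows that fall in each category
    data[f'{col}_freq'] = data[col].map(freq)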
Three interaction features were created: occupation × ad category, device × ad category, and region × material type.
data['occupation_category'] = data['occupation'].astype(str) + '_' + data['category'].astype(str)
data['region_material_type'] = data['region'].astype(str) + '_' + data['material_type'].astype(str)
data['device_category'] = data['device'].astype(str) + '_' + data['category'].astype(str)
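These concatenated columns are plain strings, and LightGBM's sklearn interface will not accept object-dtype features, so they still need to be converted to numbers before training. A minimal sketch that simply reuses the LabelEncoder approach from above (the original post does not show this step):
for col in ['occupation_category', 'region_material_type', 'device_category']:
    le = LabelEncoder()
    data[col] = le.fit_transform(data[col])  # turn the string interactions into integer codes
    label_encoders[col] = le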
purchase_history and activity_score were binned to make the model less sensitive to outliers.
bins_purchase = [0, 1, 5, 10, 20, 50, 100]
labels_purchase = [0, 1, 2, 3, 4, 5]
data['purchase_history_bin'] = pd.cut(data['purchase_history'], bins=bins_purchase, labels=labels_purchase, include_lowest=True)

bins_activity = [0, 10, 20, 30, 40, 50, 100]
labels_activity = [0, 1, 2, 3, 4, 5]
data['activity_score_bin'] = pd.cut(data['activity_score'], bins=bins_activity, labels=labels_activity, include_lowest=True)
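One pd.cut caveat worth noting (not mentioned in the original): values outside the bin edges, e.g. purchase_history above 100, become NaN instead of landing in the last bin. If the data can exceed the upper edge, an open-ended last bin avoids that; this is an assumption about the data range, only the upper bound is changed:
bins_purchase = [0, 1, 5, 10, 20, 50, np.inf]  # unbounded last bin so large values are not dropped as NaN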
Model parameters
A LightGBM model was used for training.
params = {
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'auc',
    'num_leaves': 63,
    'learning_rate': 0.01,
    'feature_fraction': 0.8,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': -1,
    'n_estimators': 5000,
    'n_jobs': -1
}
StratifiedKFold is used for cross-validation so that each fold keeps a similar ratio of positive to negative samples. Within each fold, a LightGBM model is trained and its AUC is computed.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
oof_preds = np.zeros(len(df_train))
test_preds = np.zeros(len(df_test))
auc_scores = []

for fold, (train_idx, val_idx) in enumerate(skf.split(df_train, df_train[label])):
    X, X_val = df_train[feats].iloc[train_idx], df_train[feats].iloc[val_idx]
    y, y_val = df_train[label].iloc[train_idx], df_train[label].iloc[val_idx]
    model = LGBMClassifier(**params)
    # early_stopping_rounds/verbose as fit() arguments work with lightgbm 3.x;
    # lightgbm >= 4.0 expects callbacks=[lightgbm.early_stopping(100), lightgbm.log_evaluation(200)] instead
    model.fit(X, y, eval_set=[(X_val, y_val)], early_stopping_rounds=100, verbose=200)
    val_pred = model.predict_proba(X_val)[:, 1]
    oof_preds[val_idx] = val_pred  # fill the out-of-fold predictions declared above
    test_preds += model.predict_proba(df_test[feats])[:, 1] / skf.n_splits  # average test predictions over folds
    auc = roc_auc_score(y_val, val_pred)
    auc_scores.append(auc)
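The snippet stops at collecting the per-fold AUCs; a minimal sketch of the wrap-up that would typically follow (the id/is_click submission columns are an assumption, not from the original post):
print(f'mean fold AUC: {np.mean(auc_scores):.4f}')
print(f'OOF AUC: {roc_auc_score(df_train[label], oof_preds):.4f}')
# hypothetical submission format -- adjust the column names to the competition template
submission = pd.DataFrame({'id': df_test['id'], 'is_click': test_preds})
submission.to_csv('submission.csv', index=False)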