Kaggle video action analysis: 1st and Future - Player Contact Detection

The goal of this competition is to detect external contact experienced by players in NFL (American football) games, using video and player tracking data to identify the moments when contact occurs and help improve player safety. There are two kinds of contact: player-to-player and player-to-ground (sole-of-the-foot-to-ground contact is excluded). It is run by the same host as a previous competition I worked on:

Kaggle video tracking: NFL Health & Safety - Helmet Assignment - CSDN blog

That one was video tracking using DeepSORT; this competition uses a 2.5D CNN.

EDA

For EDA you can refer to this notebook, which uses fasteda and is quite handy:

NFL Player Contact Detection EDA 🏈 | Kaggle

The video data lives in the test and train folders. train_baseline_helmets.csv is also provided; it was produced by the winning solution of the previous competition (the video tracking task I worked on before). train_player_tracking.csv is sampled at 10 Hz while the video runs at 59.94 fps, so a conversion between the two is needed later. The snap event (the start of the play) occurs at the fifth second of each video.

train_labels.csv

  • step: A number representing each timestep for each play, starting at 0 at the moment of the play starting, and incrementing by 1 every 0.1 seconds.
  • As noted above, the play starts at second 5 of the video, and one step is 0.1 seconds.

Contacts are recorded at 10 Hz; the step-to-frame mapping is sketched below.
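To make the sync concrete, here is a minimal sketch of the step-to-video-frame mapping (the same formula appears again in the feature code later; the 5-second snap offset and the +1 are taken from there):

# one step = 0.1 s; the video runs at 59.94 fps; the snap (step 0) is at t = 5 s
def step_to_frame(step, fps=59.94, snap_sec=5.0):
    return int(step / 10 * fps + snap_sec * fps) + 1

print(step_to_frame(0))    # 300 -> the frame index at the snap
print(step_to_frame(10))   # 360 -> one second after the snap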

[train/test]_player_tracking.csv

  • datetime: timestamp at 10 Hz.

[train/test]_video_metadata.csv

Can be used to sync with the player tracking data; it is synchronized with the videos.

Training

I rented a GPU and trained it myself: a bit over 20 hours for 10 epochs. I uploaded the weights to Kaggle:

track_weight | Kaggle

An additional dataset you'll need is below. I ran on a 4090 GPU with 20 CPU cores; if you train it yourself, adjust the configuration accordingly.

timm-0.6.9 | Kaggle

Imports

import os
import sys
import glob
import numpy as np
import pandas as pd
import random
import math
import gc
import cv2
from tqdm import tqdm
import time
from functools import lru_cache
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import Dataset, DataLoader
from torch.cuda.amp import autocast, GradScaler
sys.path.append('../input/timm-0-6-9/pytorch-image-models-master')  # must come before importing timm
import timm
from timm.scheduler import CosineLRScheduler
import albumentations as A
from albumentations.pytorch import ToTensorV2
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

Configuration

CFG = {
    'seed': 42,
    'model': 'convnext_small.fb_in1k',
    'img_size': 256,
    'epochs': 10,
    'train_bs': 48,
    'valid_bs': 32,
    'lr': 1e-3,
    'weight_decay': 1e-6,
    'num_workers': 20,
    'max_grad_norm': 1000,
    'epochs_warmup': 3.0
}

I used ConvNeXt, a network obtained by repeatedly revising a plain CNN to follow the ViT design recipe. Read the paper if you're interested, though it is essentially one long series of tuning ablations.

Set the seed and device

def seed_everything(seed):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(CFG['seed'])
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

Read the data and add some extra columns

def expand_contact_id(df):
    """Splits out contact_id into separate columns."""
    df["game_play"] = df["contact_id"].str[:12]
    df["step"] = df["contact_id"].str.split("_").str[-3].astype("int")
    df["nfl_player_id_1"] = df["contact_id"].str.split("_").str[-2]
    df["nfl_player_id_2"] = df["contact_id"].str.split("_").str[-1]
    return df
labels = expand_contact_id(pd.read_csv("../input/nfl-player-contact-detection/train_labels.csv"))
train_tracking = pd.read_csv("../input/nfl-player-contact-detection/train_player_tracking.csv")
train_helmets = pd.read_csv("../input/nfl-player-contact-detection/train_baseline_helmets.csv")
train_video_metadata = pd.read_csv("../input/nfl-player-contact-detection/train_video_metadata.csv")

Convert the videos into frames

import subprocess
from tqdm import tqdm

# train_helmets is assumed to be a DataFrame containing the video file names
for video in tqdm(train_helmets.video.unique()):
    if 'Endzone2' not in video:
        # input video path
        input_path = f'/openbayes/home/train/{video}'
        # output frame path
        output_path = f'/openbayes/train/frames/{video}_%04d.jpg'
        # build the ffmpeg command
        command = [
            'ffmpeg',
            '-i', input_path,      # input video file
            '-q:v', '5',           # output image quality
            '-f', 'image2',        # output an image sequence
            output_path,           # output image path
            '-hide_banner',        # hide the ffmpeg banner
            '-loglevel', 'error'   # only show error logs
        ]
        # run the command
        subprocess.run(command, check=True)

You can tweak the image quality there. This step can't be run for training on Kaggle; you need to rent your own GPU to make it feasible.

Create some features

def create_features(df, tr_tracking, merge_col="step", use_cols=["x_position", "y_position"]):
    output_cols = []
    df_combo = (
        df.astype({"nfl_player_id_1": "str"})
        .merge(
            tr_tracking.astype({"nfl_player_id": "str"})[
                ["game_play", merge_col, "nfl_player_id"] + use_cols
            ],
            left_on=["game_play", merge_col, "nfl_player_id_1"],
            right_on=["game_play", merge_col, "nfl_player_id"],
            how="left",
        )
        .rename(columns={c: c+"_1" for c in use_cols})
        .drop("nfl_player_id", axis=1)
        .merge(
            tr_tracking.astype({"nfl_player_id": "str"})[
                ["game_play", merge_col, "nfl_player_id"] + use_cols
            ],
            left_on=["game_play", merge_col, "nfl_player_id_2"],
            right_on=["game_play", merge_col, "nfl_player_id"],
            how="left",
        )
        .drop("nfl_player_id", axis=1)
        .rename(columns={c: c+"_2" for c in use_cols})
        .sort_values(["game_play", merge_col, "nfl_player_id_1", "nfl_player_id_2"])
        .reset_index(drop=True)
    )
    output_cols += [c+"_1" for c in use_cols]
    output_cols += [c+"_2" for c in use_cols]

    if ("x_position" in use_cols) & ("y_position" in use_cols):
        index = df_combo['x_position_2'].notnull()
        distance_arr = np.full(len(index), np.nan)
        tmp_distance_arr = np.sqrt(
            np.square(df_combo.loc[index, "x_position_1"] - df_combo.loc[index, "x_position_2"])
            + np.square(df_combo.loc[index, "y_position_1"] - df_combo.loc[index, "y_position_2"])
        )
        distance_arr[index] = tmp_distance_arr
        df_combo['distance'] = distance_arr
        output_cols += ["distance"]

    df_combo['G_flug'] = (df_combo['nfl_player_id_2'] == "G")
    output_cols += ["G_flug"]
    return df_combo, output_cols

use_cols = [
    'x_position', 'y_position', 'speed', 'distance',
    'direction', 'orientation', 'acceleration', 'sa'
]
train, feature_cols = create_features(labels, train_tracking, use_cols=use_cols)

This merges the labels with train_tracking; feature_cols is what the training code uses later. A quick inspection snippet follows.
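A quick way to sanity-check the merge result (an inspection snippet, reusing the train and feature_cols objects created above):

# 8 tracking columns per player (x_position_1 ... sa_2) + distance + G_flug = 18
print(len(feature_cols))   # 18
print(train[['game_play', 'step', 'nfl_player_id_1', 'nfl_player_id_2',
             'distance', 'G_flug']].head())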

Sync with the video frame rate and filter out some pairs

train_filtered = train.query('not distance>2').reset_index(drop=True)
train_filtered['frame'] = (train_filtered['step']/10*59.94+5*59.94).astype('int')+1
train_filtered.head()

The video runs at 59.94 fps while the tracking data is 10 Hz; pairs of players that are too far apart (distance > 2) are dropped here.
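Why `not distance>2` instead of `distance<=2`: comparisons against NaN evaluate to False, so the negated form keeps rows whose distance is NaN, which are exactly the player-to-ground pairs (nfl_player_id_2 == "G"). A tiny illustrative demo:

import numpy as np
import pandas as pd

demo = pd.DataFrame({'distance': [1.0, 3.0, np.nan]})
print(demo.query('not distance>2'))   # keeps the 1.0 row and the NaN (ground) row
print(demo.query('distance<=2'))      # would drop the NaN row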

Data augmentation

train_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(p=0.5),
    A.RandomBrightnessContrast(brightness_limit=(-0.1, 0.1), contrast_limit=(-0.1, 0.1), p=0.5),
    A.Normalize(mean=[0.], std=[1.]),
    ToTensorV2()
])

valid_aug = A.Compose([
    A.Normalize(mean=[0.], std=[1.]),
    ToTensorV2()
])

Build lookup dictionaries

video2helmets = {}
train_helmets_new = train_helmets.set_index('video')
for video in tqdm(train_helmets.video.unique()):
    video2helmets[video] = train_helmets_new.loc[video].reset_index(drop=True)

video2frames = {}
for game_play in tqdm(train_video_metadata.game_play.unique()):
    for view in ['Endzone', 'Sideline']:
        video = game_play + f'_{view}.mp4'
        video2frames[video] = max(list(map(lambda x: int(x.split('_')[-1].split('.')[0]),
                                           glob.glob(f'../train/frames/{video}*'))))

This pulls out each video's helmet detections and its maximum frame number. The detections are used later to crop the images; the maximum frame number ensures sampled frames don't go out of range.

Dataset

class MyDataset(Dataset):
    def __init__(self, df, aug=train_aug, mode='train'):
        self.df = df
        self.frame = df.frame.values
        self.feature = df[feature_cols].fillna(-1).values
        self.players = df[['nfl_player_id_1', 'nfl_player_id_2']].values
        self.game_play = df.game_play.values
        self.aug = aug
        self.mode = mode

    def __len__(self):
        return len(self.df)

    # @lru_cache(1024)
    # def read_img(self, path):
    #     return cv2.imread(path, 0)

    def __getitem__(self, idx):
        window = 24
        frame = self.frame[idx]

        if self.mode == 'train':
            frame = frame + random.randint(-6, 6)

        players = []
        for p in self.players[idx]:
            if p == 'G':
                players.append(p)
            else:
                players.append(int(p))

        imgs = []
        for view in ['Endzone', 'Sideline']:
            video = self.game_play[idx] + f'_{view}.mp4'
            tmp = video2helmets[video]
            # tmp = tmp.query('@frame-@window<=frame<=@frame+@window')
            tmp = tmp[tmp['frame'].between(frame-window, frame+window)]
            tmp = tmp[tmp.nfl_player_id.isin(players)]  # .sort_values(['nfl_player_id', 'frame'])
            tmp_frames = tmp.frame.values
            tmp = tmp.groupby('frame')[['left', 'width', 'top', 'height']].mean()
            # 0.002s

            bboxes = []
            for f in range(frame-window, frame+window+1, 1):
                if f in tmp_frames:
                    x, w, y, h = tmp.loc[f][['left', 'width', 'top', 'height']]
                    bboxes.append([x, w, y, h])
                else:
                    bboxes.append([np.nan, np.nan, np.nan, np.nan])
            bboxes = pd.DataFrame(bboxes).interpolate(limit_direction='both').values
            bboxes = bboxes[::4]
            if bboxes.sum() > 0:
                flag = 1
            else:
                flag = 0
            # 0.03s

            for i, f in enumerate(range(frame-window, frame+window+1, 4)):
                img_new = np.zeros((256, 256), dtype=np.float32)
                if flag == 1 and f <= video2frames[video]:
                    img = cv2.imread(f'../train/frames/{video}_{f:04d}.jpg', 0)
                    x, w, y, h = bboxes[i]
                    img = img[int(y+h/2)-128:int(y+h/2)+128,
                              int(x+w/2)-128:int(x+w/2)+128].copy()
                    img_new[:img.shape[0], :img.shape[1]] = img
                imgs.append(img_new)
            # 0.06s

        feature = np.float32(self.feature[idx])
        img = np.array(imgs).transpose(1, 2, 0)
        img = self.aug(image=img)["image"]
        label = np.float32(self.df.contact.values[idx])
        return img, feature, label

Model

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.backbone = timm.create_model(CFG['model'], pretrained=True, num_classes=500, in_chans=13)
        self.mlp = nn.Sequential(
            nn.Linear(18, 64),
            nn.LayerNorm(64),
            nn.ReLU(),
            nn.Dropout(0.2),
        )
        self.fc = nn.Linear(64+500*2, 1)

    def forward(self, img, feature):
        b, c, h, w = img.shape
        img = img.reshape(b*2, c//2, h, w)
        img = self.backbone(img).reshape(b, -1)
        feature = self.mlp(feature)
        y = self.fc(torch.cat([img, feature], dim=1))
        return y

len(feature_cols) is 18 (eight tracking columns per player, plus distance and G_flug), so the MLP input size is 18. In the dataset code, shown again below,

for i, f in enumerate(range(frame-window, frame+window+1, 4)):
    img_new = np.zeros((256, 256), dtype=np.float32)
    if flag == 1 and f <= video2frames[video]:
        img = cv2.imread(f'/openbayes/train/frames/{video}_{f:04d}.jpg', 0)
        x, w, y, h = bboxes[i]
        img = img[int(y+h/2)-128:int(y+h/2)+128,
                  int(x+w/2)-128:int(x+w/2)+128].copy()
        img_new[:img.shape[0], :img.shape[1]] = img
    imgs.append(img_new)

frames are sampled every 4th frame over a ±24-frame window, i.e. 13 frames per view; with two views that makes 26 frames in total, hence the 26-channel input (split back into two 13-channel stacks in the forward pass). As in the previous competition, two camera views are provided:

for view in ['Endzone', 'Sideline']:
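A toy shape check (illustrative; a batch size of 2 is assumed) of how the forward pass splits the two views into the batch dimension before the 13-channel backbone:

import torch

img = torch.randn(2, 26, 256, 256)    # B=2; 13 frames x 2 views stacked on channels
b, c, h, w = img.shape
img = img.reshape(b * 2, c // 2, h, w)
print(img.shape)                       # torch.Size([4, 13, 256, 256])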

Loss function

model = Model()
model.to(device)
model.train()
import torch.nn as nn
criterion = nn.BCEWithLogitsLoss()

Binary cross-entropy (with logits) is used here.
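A tiny sanity check (illustrative only): BCEWithLogitsLoss applies the sigmoid internally, which is why the model outputs raw logits and sigmoid only shows up at inference time:

import torch
from torch import nn

logits = torch.tensor([0.5, -1.2])
target = torch.tensor([1.0, 0.0])
print(nn.BCEWithLogitsLoss()(logits, target).item())
print(nn.BCELoss()(torch.sigmoid(logits), target).item())  # same value, less numerically stable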

Evaluation

def evaluate(model, loader_val, *, compute_score=True, pbar=None):
    """Predict and compute loss and score"""
    tb = time.time()
    in_training = model.training
    model.eval()
    loss_sum = 0.0
    n_sum = 0
    y_all = []
    y_pred_all = []

    if pbar is not None:
        pbar = tqdm(desc='Predict', nrows=78, total=pbar)

    total = len(loader_val)
    for ibatch, (img, feature, label) in tqdm(enumerate(loader_val), total=total):
        # img, feature, label = [x.to(device) for x in batch]
        img = img.to(device)
        feature = feature.to(device)
        n = label.size(0)
        label = label.to(device)

        with torch.no_grad():
            y_pred = model(img, feature)
        loss = criterion(y_pred.view(-1), label)

        n_sum += n
        loss_sum += n * loss.item()

        if pbar is not None:
            pbar.update(len(img))

        del loss, img, label
        gc.collect()

    loss_val = loss_sum / n_sum

    ret = {'loss': loss_val,
           'time': time.time() - tb}

    model.train(in_training)
    gc.collect()
    return ret

Load the data, set up the learning-rate schedule and optimizer

train_set, valid_set = train_test_split(train_filtered, test_size=0.05, random_state=42, stratify=train_filtered['contact'])
train_set = MyDataset(train_set, train_aug, 'train')
train_loader = DataLoader(train_set, batch_size=CFG['train_bs'], shuffle=True, num_workers=12, pin_memory=True,drop_last=True)
valid_set = MyDataset(valid_set, valid_aug, 'test')
valid_loader = DataLoader(valid_set, batch_size=CFG['valid_bs'], shuffle=False, num_workers=12, pin_memory=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=CFG['lr'], weight_decay=CFG['weight_decay'])
nbatch = len(train_loader)
warmup = CFG['epochs_warmup'] * nbatch
nsteps = CFG['epochs'] * nbatch 
scheduler = CosineLRScheduler(optimizer, warmup_t=warmup, warmup_lr_init=0.0, warmup_prefix=True, t_initial=(nsteps - warmup), lr_min=1e-6)

Start training; the whole model (not just the weights) is saved here. The timer, best-loss tracker, and validation-time accumulator are initialized up front since the loop below relies on them.

tb = time.time()     # overall timer
best_loss = np.inf   # best validation loss so far
time_val = 0.0       # accumulated validation time

for iepoch in range(CFG['epochs']):
    print('Epoch:', iepoch+1)
    loss_sum = 0.0
    n_sum = 0
    total = len(train_loader)

    # Train
    for ibatch, (img, feature, label) in tqdm(enumerate(train_loader), total=total):
        img = img.to(device)
        feature = feature.to(device)
        n = label.size(0)
        label = label.to(device)

        optimizer.zero_grad()
        y_pred = model(img, feature).squeeze(-1)
        loss = criterion(y_pred, label)
        loss_train = loss.item()
        loss_sum += n * loss_train
        n_sum += n

        loss.backward()
        grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), CFG['max_grad_norm'])
        optimizer.step()
        scheduler.step(iepoch * nbatch + ibatch + 1)

    val = evaluate(model, valid_loader)
    time_val += val['time']

    loss_train = loss_sum / n_sum
    dt = (time.time() - tb) / 60
    print('Epoch: %d Train Loss: %.4f Test Loss: %.4f Time: %.2f min' %
          (iepoch + 1, loss_train, val['loss'], dt))

    if val['loss'] < best_loss:
        best_loss = val['loss']
        # Save model
        ofilename = '/openbayes/home/best_model.pt'
        torch.save(model, ofilename)
        print(ofilename, 'written')

    del val
    gc.collect()

dt = time.time() - tb
print(' %.2f min total, %.2f min val' % (dt / 60, time_val / 60))
gc.collect()

Saving only the weights can run into some bugs; saving the whole model is the safer route. A sketch of both styles follows.
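For reference, a sketch of the two saving styles (the paths here are just placeholders): saving the whole module pickles a reference to the class, so loading requires the same class definition to be importable, while a state_dict is plain tensors but needs the model re-instantiated first:

torch.save(model, 'best_model.pt')                     # whole module, as done above
model = torch.load('best_model.pt')                    # loads directly

torch.save(model.state_dict(), 'best_weights.pt')      # weights only
model2 = Model()
model2.load_state_dict(torch.load('best_weights.pt'))  # needs the class defined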

Inference

Here I use the TTA (test-time augmentation) version.

Imports

import os
import sys
sys.path.append('/kaggle/input/timm-0-6-9/pytorch-image-models-master')
import glob
import numpy as np
import pandas as pd
import random
import math
import gc
import cv2
from tqdm import tqdm
import time
from functools import lru_cache
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import Dataset, DataLoader
from torch.cuda.amp import autocast, GradScaler
import timm
import albumentations as A
from albumentations.pytorch import ToTensorV2
import matplotlib.pyplot as plt
from sklearn.metrics import matthews_corrcoef

Data processing

This is basically the same as the training part, all put together in one place.

CFG = {
    'seed': 42,
    'model': 'convnext_small.fb_in1k',
    'img_size': 256,
    'epochs': 10,
    'train_bs': 100,
    'valid_bs': 64,
    'lr': 1e-3,
    'weight_decay': 1e-6,
    'num_workers': 4
}
def seed_everything(seed):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(CFG['seed'])
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def expand_contact_id(df):
    """Splits out contact_id into separate columns."""
    df["game_play"] = df["contact_id"].str[:12]
    df["step"] = df["contact_id"].str.split("_").str[-3].astype("int")
    df["nfl_player_id_1"] = df["contact_id"].str.split("_").str[-2]
    df["nfl_player_id_2"] = df["contact_id"].str.split("_").str[-1]
    return df

labels = expand_contact_id(pd.read_csv("/kaggle/input/nfl-player-contact-detection/sample_submission.csv"))
test_tracking = pd.read_csv("/kaggle/input/nfl-player-contact-detection/test_player_tracking.csv")
test_helmets = pd.read_csv("/kaggle/input/nfl-player-contact-detection/test_baseline_helmets.csv")
test_video_metadata = pd.read_csv("/kaggle/input/nfl-player-contact-detection/test_video_metadata.csv")
!mkdir -p ../work/frames

for video in tqdm(test_helmets.video.unique()):
    if 'Endzone2' not in video:
        !ffmpeg -i /kaggle/input/nfl-player-contact-detection/test/{video} -q:v 2 -f image2 /kaggle/work/frames/{video}_%04d.jpg -hide_banner -loglevel error
def create_features(df, tr_tracking, merge_col="step", use_cols=["x_position", "y_position"]):
    output_cols = []
    df_combo = (
        df.astype({"nfl_player_id_1": "str"})
        .merge(
            tr_tracking.astype({"nfl_player_id": "str"})[
                ["game_play", merge_col, "nfl_player_id"] + use_cols
            ],
            left_on=["game_play", merge_col, "nfl_player_id_1"],
            right_on=["game_play", merge_col, "nfl_player_id"],
            how="left",
        )
        .rename(columns={c: c+"_1" for c in use_cols})
        .drop("nfl_player_id", axis=1)
        .merge(
            tr_tracking.astype({"nfl_player_id": "str"})[
                ["game_play", merge_col, "nfl_player_id"] + use_cols
            ],
            left_on=["game_play", merge_col, "nfl_player_id_2"],
            right_on=["game_play", merge_col, "nfl_player_id"],
            how="left",
        )
        .drop("nfl_player_id", axis=1)
        .rename(columns={c: c+"_2" for c in use_cols})
        .sort_values(["game_play", merge_col, "nfl_player_id_1", "nfl_player_id_2"])
        .reset_index(drop=True)
    )
    output_cols += [c+"_1" for c in use_cols]
    output_cols += [c+"_2" for c in use_cols]

    if ("x_position" in use_cols) & ("y_position" in use_cols):
        index = df_combo['x_position_2'].notnull()
        distance_arr = np.full(len(index), np.nan)
        tmp_distance_arr = np.sqrt(
            np.square(df_combo.loc[index, "x_position_1"] - df_combo.loc[index, "x_position_2"])
            + np.square(df_combo.loc[index, "y_position_1"] - df_combo.loc[index, "y_position_2"])
        )
        distance_arr[index] = tmp_distance_arr
        df_combo['distance'] = distance_arr
        output_cols += ["distance"]

    df_combo['G_flug'] = (df_combo['nfl_player_id_2'] == "G")
    output_cols += ["G_flug"]
    return df_combo, output_cols

use_cols = [
    'x_position', 'y_position', 'speed', 'distance',
    'direction', 'orientation', 'acceleration', 'sa'
]
test, feature_cols = create_features(labels, test_tracking, use_cols=use_cols)
test
test_filtered = test.query('not distance>2').reset_index(drop=True)
test_filtered['frame'] = (test_filtered['step']/10*59.94+5*59.94).astype('int')+1
test_filtered
del test, labels, test_tracking
gc.collect()
train_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(p=0.5),
    A.RandomBrightnessContrast(brightness_limit=(-0.1, 0.1), contrast_limit=(-0.1, 0.1), p=0.5),
    A.Normalize(mean=[0.], std=[1.]),
    ToTensorV2()
])

valid_aug = A.Compose([
    A.Normalize(mean=[0.], std=[1.]),
    ToTensorV2()
])
video2helmets = {}
test_helmets_new = test_helmets.set_index('video')
for video in tqdm(test_helmets.video.unique()):
    video2helmets[video] = test_helmets_new.loc[video].reset_index(drop=True)

del test_helmets, test_helmets_new
gc.collect()
video2frames = {}
for game_play in tqdm(test_video_metadata.game_play.unique()):
    for view in ['Endzone', 'Sideline']:
        video = game_play + f'_{view}.mp4'
        video2frames[video] = max(list(map(lambda x: int(x.split('_')[-1].split('.')[0]),
                                           glob.glob(f'/kaggle/work/frames/{video}*'))))
class MyDataset(Dataset):
    def __init__(self, df, aug=valid_aug, mode='train'):
        self.df = df
        self.frame = df.frame.values
        self.feature = df[feature_cols].fillna(-1).values
        self.players = df[['nfl_player_id_1', 'nfl_player_id_2']].values
        self.game_play = df.game_play.values
        self.aug = aug
        self.mode = mode

    def __len__(self):
        return len(self.df)

    # @lru_cache(1024)
    # def read_img(self, path):
    #     return cv2.imread(path, 0)

    def __getitem__(self, idx):
        window = 24
        frame = self.frame[idx]

        if self.mode == 'train':
            frame = frame + random.randint(-6, 6)

        players = []
        for p in self.players[idx]:
            if p == 'G':
                players.append(p)
            else:
                players.append(int(p))

        imgs = []
        for view in ['Endzone', 'Sideline']:
            video = self.game_play[idx] + f'_{view}.mp4'
            tmp = video2helmets[video]
            # tmp = tmp.query('@frame-@window<=frame<=@frame+@window')
            tmp = tmp[tmp['frame'].between(frame-window, frame+window)]
            tmp = tmp[tmp.nfl_player_id.isin(players)]  # .sort_values(['nfl_player_id', 'frame'])
            tmp_frames = tmp.frame.values
            tmp = tmp.groupby('frame')[['left', 'width', 'top', 'height']].mean()
            # 0.002s

            bboxes = []
            for f in range(frame-window, frame+window+1, 1):
                if f in tmp_frames:
                    x, w, y, h = tmp.loc[f][['left', 'width', 'top', 'height']]
                    bboxes.append([x, w, y, h])
                else:
                    bboxes.append([np.nan, np.nan, np.nan, np.nan])
            bboxes = pd.DataFrame(bboxes).interpolate(limit_direction='both').values
            bboxes = bboxes[::4]
            if bboxes.sum() > 0:
                flag = 1
            else:
                flag = 0
            # 0.03s

            for i, f in enumerate(range(frame-window, frame+window+1, 4)):
                img_new = np.zeros((256, 256), dtype=np.float32)
                if flag == 1 and f <= video2frames[video]:
                    img = cv2.imread(f'/kaggle/work/frames/{video}_{f:04d}.jpg', 0)
                    x, w, y, h = bboxes[i]
                    img = img[int(y+h/2)-128:int(y+h/2)+128,
                              int(x+w/2)-128:int(x+w/2)+128].copy()
                    img_new[:img.shape[0], :img.shape[1]] = img
                imgs.append(img_new)
            # 0.06s

        feature = np.float32(self.feature[idx])
        img = np.array(imgs).transpose(1, 2, 0)
        img = self.aug(image=img)["image"]
        label = np.float32(self.df.contact.values[idx])
        return img, feature, label

Inspect a cropped image

img, feature, label = MyDataset(test_filtered, valid_aug, 'test')[0]
plt.imshow(img.permute(1,2,0)[:,:,7])
plt.show()
img.shape, feature, label

Run inference

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.backbone = timm.create_model(CFG['model'], pretrained=False, num_classes=500, in_chans=13)
        self.mlp = nn.Sequential(
            nn.Linear(18, 64),
            nn.LayerNorm(64),
            nn.ReLU(),
            nn.Dropout(0.2),
            # nn.Linear(64, 64),
            # nn.LayerNorm(64),
            # nn.ReLU(),
            # nn.Dropout(0.2)
        )
        self.fc = nn.Linear(64+500*2, 1)

    def forward(self, img, feature):
        b, c, h, w = img.shape
        img = img.reshape(b*2, c//2, h, w)
        img = self.backbone(img).reshape(b, -1)
        feature = self.mlp(feature)
        y = self.fc(torch.cat([img, feature], dim=1))
        return y
test_set = MyDataset(test_filtered, valid_aug, 'test')
test_loader = DataLoader(test_set, batch_size=CFG['valid_bs'], shuffle=False,
                         num_workers=CFG['num_workers'], pin_memory=True)

model = Model().to(device)
model = torch.load('/kaggle/input/track-weight/best_model.pt')
model.eval()

y_pred = []
with torch.no_grad():
    tk = tqdm(test_loader, total=len(test_loader))
    for step, batch in enumerate(tk):
        if step % 4 != 3:
            img, feature, label = [x.to(device) for x in batch]
            output1 = model(img, feature).squeeze(-1)
            output2 = model(img.flip(-1), feature).squeeze(-1)
            y_pred.extend(0.2*output1.sigmoid().cpu().numpy() + 0.8*output2.sigmoid().cpu().numpy())
        else:
            img, feature, label = [x.to(device) for x in batch]
            output = model(img.flip(-1), feature).squeeze(-1)
            y_pred.extend(output.sigmoid().cpu().numpy())

y_pred = np.array(y_pred)

Horizontal flipping is used here; TTA works as a kind of implicit model ensembling.

Submission

th = 0.29
test_filtered['contact'] = (y_pred >= th).astype('int')

sub = pd.read_csv('/kaggle/input/nfl-player-contact-detection/sample_submission.csv')
sub = sub.drop("contact", axis=1).merge(test_filtered[['contact_id', 'contact']], how='left', on='contact_id')
sub['contact'] = sub['contact'].fillna(0).astype('int')

sub[["contact_id", "contact"]].to_csv("submission.csv", index=False)
sub.head()

Inference code link and score

infer_code | Kaggle

Revised version

The version above didn't perform that well, so I switched to ResNet-50 for training. The results, link, and weights are below, followed by a sketch of the backbone swap.

infer_code | Kaggle

best_weight | Kaggle
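Since the backbone name is read from CFG, swapping to ResNet-50 is a one-line change (a sketch; 'resnet50' is the timm model name, and in_chans/num_classes are handled by timm.create_model exactly as before):

CFG['model'] = 'resnet50'   # any timm backbone name works here
model = Model()             # timm.create_model picks up the new name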


这些聚合功能可以根据它们的作用和应用场景分为几大类&#xff0c;以下是分类后的结果&#xff1a; 1.基础聚合&#xff08;Basic Aggregations&#xff09; • Terms&#xff08;字段聚合&#xff09; 根据字段值对数据进行分组并统计。 例子&#xff1a;按产品类别统计销…