# [Image Classification] [Deep Learning] [PyTorch] Inception-ResNet Explained

## Contents

- Preface
- Inception-ResNet explained
  - Inception-ResNet-V1
  - Inception-ResNet-V2
  - Scaling of the Residuals
  - Overall architecture of Inception-ResNet
- GoogLeNet (Inception-ResNet) PyTorch code
  - Inception-ResNet-V1
  - Inception-ResNet-V2
- Complete code
  - Inception-ResNet-V1
  - Inception-ResNet-V2
- Summary

## Preface

GoogLeNet (Inception-ResNet) was proposed by Szegedy et al. at Google in "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning" (AAAI 2017). Inspired by ResNet's strong performance on very deep networks, the paper adds residual connections to the Inception architecture, producing two Inception-ResNet variants. The residual shortcut replaces the pooling path of the original Inception block, and channel-wise concatenation is replaced by element-wise addition, which speeds up Inception's training.

Because Inception-v4, Inception-ResNet-v1, and Inception-ResNet-v2 all come from the same paper, many readers mistakenly believe that Inception-v4 combines Inception modules with residual learning. It does not: Inception-v4 uses no residual connections and essentially continues the Inception v2/v3 line. Only Inception-ResNet-v1 and Inception-ResNet-v2 are the product of combining Inception modules with residual learning.

## Inception-ResNet explained

The core idea of Inception-ResNet is to fuse the Inception module with the ResNet residual connection and exploit the strengths of both: Inception modules capture multi-scale features by running convolutions with different kernel sizes in parallel, while residual connections counteract vanishing and exploding gradients in deep networks and make deep models easier to train. Inception-ResNet uses Inception modules similar to those of Inception-v4 and wraps each one in a residual shortcut, so every block combines a conventional Inception branch with an identity connection. This design lets the model learn richer feature representations while gradients propagate more effectively during training.

### Inception-ResNet-V1

Inception-ResNet-v1 has roughly the same computational cost as Inception-v3.

**Stem.** The stem of Inception-ResNet-V1 resembles the layers that precede the Inception block groups in Inception-v3. In the layer diagrams, a convolution without a "V" mark uses SAME padding (the output spatial size equals the input); a "V" mark means VALID padding (the output size shrinks according to the kernel and stride).

**Inception-resnet-A.** A variant of the Inception-A block from Inception-v4. The trailing 1×1 convolution restores the channel count so that the main branch's feature-map shape matches the shortcut branch exactly. In every Inception-resnet block, the residual shortcut replaces the pooling path of the original Inception block, and element-wise addition replaces concatenation.

**Inception-resnet-B.** A variant of the Inception-B block from Inception-v4; the trailing 1×1 convolution again makes the main branch match the shortcut branch.

**Inception-resnet-C.** A variant of the Inception-C block from Inception-v4; the trailing 1×1 convolution serves the same shape-matching purpose.

**Reduction-A.** Structurally identical to the Reduction-A block of Inception-v4; only the filter counts differ. The symbols k, l, m, n denote filter counts, and different networks use different values in their Reduction-A blocks.

**Reduction-B.** The grid-reduction block that follows the Inception-resnet-B stage (see the block diagram in the original paper).
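All of these blocks share the same residual merge: concatenate the parallel branches, restore the channel count with a 1×1 convolution, then add the result to the shortcut. A minimal sketch of that pattern (illustrative only; the class name and branch widths here are hypothetical, not taken from the paper):

```python
import torch
import torch.nn as nn

# Sketch of the shared Inception-resnet pattern: branches are concatenated,
# a 1x1 conv restores the input channel count, and the result is ADDED to
# the shortcut instead of concatenated with it.
class ResidualInceptionSketch(nn.Module):
    def __init__(self, channels, scale=0.2):
        super().__init__()
        self.scale = scale
        self.branch_a = nn.Conv2d(channels, 32, kernel_size=1)
        self.branch_b = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=1),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
        )
        # 1x1 conv maps the 64 concatenated channels back to `channels`
        self.match = nn.Conv2d(64, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        res = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.relu(x + self.scale * self.match(res))  # sum, not concat
```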
### Inception-ResNet-V2

Inception-ResNet-v2 has roughly the same computational cost as Inception-v4 but trains faster than pure Inception-v4. Its overall framework matches Inception-ResNet-v1: the stem is identical to that of Inception-v4, and the remaining blocks are structurally similar to their Inception-ResNet-v1 counterparts but use more convolution filters.

**Stem.** Identical to the stem of Inception-v4.

**Inception-resnet-A.** A variant of the Inception-A block from Inception-v4; the trailing 1×1 convolution restores the channel count so the main branch matches the shortcut branch exactly.

**Inception-resnet-B.** A variant of the Inception-B block from Inception-v4; the 1×1 convolution plays the same shape-matching role.

**Inception-resnet-C.** A variant of the Inception-C block from Inception-v4; the 1×1 convolution plays the same shape-matching role.

**Reduction-A.** Structurally identical to the Reduction-A block of Inception-v4, differing only in filter counts; k, l, m, n denote filter counts and vary between networks.

**Reduction-B.** The grid-reduction block that follows the Inception-resnet-B stage.

### Scaling of the Residuals

When a layer has too many filters (beyond about 1000), the residual variants become unstable: the network "dies" early in training, and after a few tens of thousands of iterations the layers before the average pooling begin to output only zeros. Neither lowering the learning rate nor adding extra BN layers prevents this. Scaling down the residual-branch output before adding it to the shortcut stabilizes training; scale factors between 0.1 and 0.3 are typical. The scaling does not appear to be strictly necessary, and it does not seem to affect final accuracy, but it does benefit training stability.

### Overall architecture of Inception-ResNet

(The original paper's schematic of the full Inception-ResNet-V1 model appeared here.)

(The original paper's schematic of the full Inception-ResNet-V2 model appeared here.)

Be aware that some of the channel counts the original paper labels for Inception-ResNet-V2 are wrong; they will not add up when you write the code.

The two versions share the same overall layout; only the concrete Stem, Inception, and Reduction blocks differ slightly. For image classification, Inception-ResNet-V1 and Inception-ResNet-V2 split into two parts: a backbone composed of the Stem module, Inception-resnet modules, and pooling layers, and a classifier composed of fully connected layers.

## GoogLeNet (Inception-ResNet) PyTorch code

### Inception-ResNet-V1

Convolution group (convolution + BN + activation):

```python
# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x
```

Stem module (convolution groups + max pooling):

```python
# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3x3(stride 2, valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        # conv1x1(80)
        self.conv5 = BasicConv2d(64, 80, kernel_size=1)
        # conv3x3(192, valid)
        self.conv6 = BasicConv2d(80, 192, kernel_size=3)
        # conv3x3(256, stride 2, valid)
        self.conv7 = BasicConv2d(192, 256, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.maxpool4(self.conv3(self.conv2(self.conv1(x))))
        x = self.conv7(self.conv6(self.conv5(x)))
        return x
```

Inception_ResNet-A module (convolution groups + residual shortcut):

```python
# Inception_ResNet_A: BasicConv2d + residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) -> conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1)
        )
        # conv1x1(32) -> conv3x3(32) -> conv3x3(32)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1)
        )
        # conv1x1(256)
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
```
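A quick sanity check of the block above: with the V1 configuration (256 input channels on a 35×35 grid, the values used later in the full model), the residual design leaves the output shape identical to the input. Illustrative snippet:

```python
# The residual block preserves shape: input and output are both (1, 256, 35, 35).
block = Inception_ResNet_A(256, 32, 32, 32, 32, 32, 32, 256, scale=0.17)
x = torch.randn(1, 256, 35, 35)
print(block(x).shape)  # torch.Size([1, 256, 35, 35])
```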
Inception_ResNet-B module (convolution groups + residual shortcut):

```python
# Inception_ResNet_B: BasicConv2d + residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(128)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) -> conv1x7(128) -> conv7x1(128)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0))
        )
        # conv1x1(896)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
```

Inception_ResNet-C module (convolution groups + residual shortcut):

```python
# Inception_ResNet_C: BasicConv2d + residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) -> conv1x3(192) -> conv3x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0))
        )
        # conv1x1(1792)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res
```

redutionA module (convolution groups + max pooling):

```python
# redutionA: BasicConv2d + MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3x3(n, stride 2, valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2),
        )
        # conv1x1(k) -> conv3x3(l) -> conv3x3(m, stride 2, valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2)
        )
        # maxpool3x3(stride 2, valid)
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2)
        )

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate the branches
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)
```

redutionB module (convolution groups + max pooling):

```python
# redutionB: BasicConv2d + MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1x1(256) -> conv3x3(384, stride 2, valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0)
        )
        # conv1x1(256) -> conv3x3(256, stride 2, valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0),
        )
        # conv1x1(256) -> conv3x3(256) -> conv3x3(256, stride 2, valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0)
        )
        # maxpool3x3(stride 2, valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)
```
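The reduction blocks halve the spatial grid with valid stride-2 layers while growing the channel count; every branch shrinks the grid the same way, so the outputs can be concatenated. A quick check with the V1 values (illustrative):

```python
# Reduction-A: 35x35 -> 17x17; channels n + m + in_channels = 384 + 256 + 256 = 896.
reduce_a = redutionA(256, 192, 192, 256, 384)
x = torch.randn(1, 256, 35, 35)
print(reduce_a(x).shape)  # torch.Size([1, 896, 17, 17])
```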
### Inception-ResNet-V2

Apart from the Stem, the modules of Inception-ResNet-V2 are structurally identical to those of Inception-ResNet-V1; only the filter counts differ.

Convolution group (convolution + BN + activation):

```python
# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x
```

Stem module (convolution groups + max pooling):

```python
# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # parallel: maxpool3x3(stride 2, valid) and conv3x3(96, stride 2, valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv4 = BasicConv2d(64, 96, kernel_size=3, stride=2)
        # conv1x1(64) -> conv3x3(96, valid)
        self.conv5_1_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_1_2 = BasicConv2d(64, 96, kernel_size=3)
        # conv1x1(64) -> conv7x1(64) -> conv1x7(64) -> conv3x3(96, valid)
        self.conv5_2_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_2_2 = BasicConv2d(64, 64, kernel_size=(7, 1), padding=(3, 0))
        self.conv5_2_3 = BasicConv2d(64, 64, kernel_size=(1, 7), padding=(0, 3))
        self.conv5_2_4 = BasicConv2d(64, 96, kernel_size=3)
        # parallel: conv3x3(192, stride 2, valid) and maxpool3x3(stride 2, valid)
        self.conv6 = BasicConv2d(192, 192, kernel_size=3, stride=2)
        self.maxpool6 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        x = self.conv3(self.conv2(self.conv1(x)))  # shared trunk, computed once
        x1 = torch.cat([self.maxpool4(x), self.conv4(x)], 1)
        x2_1 = self.conv5_1_2(self.conv5_1_1(x1))
        x2_2 = self.conv5_2_4(self.conv5_2_3(self.conv5_2_2(self.conv5_2_1(x1))))
        x2 = torch.cat([x2_1, x2_2], 1)
        x3 = torch.cat([self.conv6(x2), self.maxpool6(x2)], 1)
        return x3
```

Inception_ResNet-A module (convolution groups + residual shortcut):

```python
# Inception_ResNet_A: BasicConv2d + residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) -> conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1)
        )
        # conv1x1(32) -> conv3x3(48) -> conv3x3(64)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1)
        )
        # conv1x1(384)
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
```

Inception_ResNet-B module (convolution groups + residual shortcut):

```python
# Inception_ResNet_B: BasicConv2d + residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) -> conv1x7(160) -> conv7x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0))
        )
        # conv1x1(1154 in the paper; 1152 here so the shapes add up)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
```
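The 1×7/7×1 pair in branch_1 factorizes a 7×7 convolution, covering the same receptive field at a fraction of the parameter cost. A rough count for a hypothetical width of 160 channels in and out (biases and BN ignored) shows the saving:

```python
# Weights of a full 7x7 conv vs. the factorized 1x7 + 7x1 pair (160 -> 160 channels).
C = 160
full_7x7 = C * C * 7 * 7      # 1,254,400 weights
factorized = 2 * C * C * 7    # 358,400 weights (1x7 plus 7x1)
print(full_7x7 / factorized)  # 3.5x fewer parameters
```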
Inception_ResNet-C module (convolution groups + residual shortcut):

```python
# Inception_ResNet_C: BasicConv2d + residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) -> conv1x3(224) -> conv3x1(256)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0))
        )
        # conv1x1(2048 in the paper; 2144 here so the shapes add up)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res
```

redutionA module (convolution groups + max pooling), identical in structure to the V1 version:

```python
# redutionA: BasicConv2d + MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3x3(n, stride 2, valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2),
        )
        # conv1x1(k) -> conv3x3(l) -> conv3x3(m, stride 2, valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2)
        )
        # maxpool3x3(stride 2, valid)
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2)
        )

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate the branches
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)
```

redutionB module (convolution groups + max pooling):

```python
# redutionB: BasicConv2d + MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1x1(256) -> conv3x3(384, stride 2, valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0)
        )
        # conv1x1(256) -> conv3x3(288, stride 2, valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0),
        )
        # conv1x1(256) -> conv3x3(288) -> conv3x3(320, stride 2, valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0)
        )
        # maxpool3x3(stride 2, valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)
```

## Complete code

Inception-ResNet expects 299×299 input images.
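For valid padding, each layer's output size is floor((in − k)/s) + 1, so the V1 stem shrinks a 299×299 input down to the 35×35 grid the Inception-resnet-A blocks expect. A small helper to trace this (illustrative only, not part of the model code):

```python
# Trace the V1 stem's spatial sizes for a 299x299 input.
def valid_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

s = 299
s = valid_out(s, 3, 2)  # conv1 3x3/2 valid  -> 149
s = valid_out(s, 3)     # conv2 3x3 valid    -> 147 (conv3 is SAME: still 147)
s = valid_out(s, 3, 2)  # maxpool 3x3/2      -> 73 (conv5 is 1x1: still 73)
s = valid_out(s, 3)     # conv6 3x3 valid    -> 71
s = valid_out(s, 3, 2)  # conv7 3x3/2 valid  -> 35
print(s)  # 35
```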
### Inception-ResNet-V1

```python
import torch
import torch.nn as nn
from torchsummary import summary


# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x


# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3x3(stride 2, valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        # conv1x1(80)
        self.conv5 = BasicConv2d(64, 80, kernel_size=1)
        # conv3x3(192, valid)
        self.conv6 = BasicConv2d(80, 192, kernel_size=3)
        # conv3x3(256, stride 2, valid)
        self.conv7 = BasicConv2d(192, 256, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.maxpool4(self.conv3(self.conv2(self.conv1(x))))
        x = self.conv7(self.conv6(self.conv5(x)))
        return x


# Inception_ResNet_A: BasicConv2d + residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) -> conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1)
        )
        # conv1x1(32) -> conv3x3(32) -> conv3x3(32)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1)
        )
        # conv1x1(256)
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)


# Inception_ResNet_B: BasicConv2d + residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(128)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) -> conv1x7(128) -> conv7x1(128)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0))
        )
        # conv1x1(896)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)


# Inception_ResNet_C: BasicConv2d + residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) -> conv1x3(192) -> conv3x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0))
        )
        # conv1x1(1792)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res


# redutionA: BasicConv2d + MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3x3(n, stride 2, valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2),
        )
        # conv1x1(k) -> conv3x3(l) -> conv3x3(m, stride 2, valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2)
        )
        # maxpool3x3(stride 2, valid)
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2)
        )

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate the branches
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)

# redutionB: BasicConv2d + MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1x1(256) -> conv3x3(384, stride 2, valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0)
        )
        # conv1x1(256) -> conv3x3(256, stride 2, valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0),
        )
        # conv1x1(256) -> conv3x3(256) -> conv3x3(256, stride 2, valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0)
        )
        # maxpool3x3(stride 2, valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)


class Inception_ResNetv1(nn.Module):
    def __init__(self, num_classes=1000, k=192, l=192, m=256, n=384):
        super(Inception_ResNetv1, self).__init__()
        blocks = []
        blocks.append(Stem(3))
        for i in range(5):
            blocks.append(Inception_ResNet_A(256, 32, 32, 32, 32, 32, 32, 256, 0.17))
        blocks.append(redutionA(256, k, l, m, n))
        for i in range(10):
            blocks.append(Inception_ResNet_B(896, 128, 128, 128, 128, 896, 0.10))
        blocks.append(redutionB(896, 256, 384, 256, 256, 256))
        for i in range(4):
            blocks.append(Inception_ResNet_C(1792, 192, 192, 192, 192, 1792, 0.20))
        # the last Inception-resnet-C block skips the final ReLU
        blocks.append(Inception_ResNet_C(1792, 192, 192, 192, 192, 1792, activation=False))
        self.features = nn.Sequential(*blocks)
        self.conv = BasicConv2d(1792, 1536, 1)
        self.global_average_pooling = nn.AdaptiveAvgPool2d((1, 1))
        # the paper uses dropout with keep probability 0.8, i.e. drop rate 0.2
        self.dropout = nn.Dropout(0.2)
        self.linear = nn.Linear(1536, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.conv(x)
        x = self.global_average_pooling(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = self.linear(x)
        return x


if __name__ == '__main__':
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model = Inception_ResNetv1().to(device)
    summary(model, input_size=(3, 299, 299))
```

summary prints the network structure and parameter counts, which makes it easy to inspect the assembled model.
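If torchsummary is not available, a bare forward pass with a dummy batch serves as a quick smoke test (illustrative):

```python
# Smoke test without torchsummary: forward a dummy batch.
model = Inception_ResNetv1(num_classes=10)
logits = model(torch.randn(2, 3, 299, 299))
print(logits.shape)  # torch.Size([2, 10])
```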
### Inception-ResNet-V2

```python
import torch
import torch.nn as nn
from torchsummary import summary


# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x


# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # parallel: maxpool3x3(stride 2, valid) and conv3x3(96, stride 2, valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv4 = BasicConv2d(64, 96, kernel_size=3, stride=2)
        # conv1x1(64) -> conv3x3(96, valid)
        self.conv5_1_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_1_2 = BasicConv2d(64, 96, kernel_size=3)
        # conv1x1(64) -> conv7x1(64) -> conv1x7(64) -> conv3x3(96, valid)
        self.conv5_2_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_2_2 = BasicConv2d(64, 64, kernel_size=(7, 1), padding=(3, 0))
        self.conv5_2_3 = BasicConv2d(64, 64, kernel_size=(1, 7), padding=(0, 3))
        self.conv5_2_4 = BasicConv2d(64, 96, kernel_size=3)
        # parallel: conv3x3(192, stride 2, valid) and maxpool3x3(stride 2, valid)
        self.conv6 = BasicConv2d(192, 192, kernel_size=3, stride=2)
        self.maxpool6 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        x = self.conv3(self.conv2(self.conv1(x)))  # shared trunk, computed once
        x1 = torch.cat([self.maxpool4(x), self.conv4(x)], 1)
        x2_1 = self.conv5_1_2(self.conv5_1_1(x1))
        x2_2 = self.conv5_2_4(self.conv5_2_3(self.conv5_2_2(self.conv5_2_1(x1))))
        x2 = torch.cat([x2_1, x2_2], 1)
        x3 = torch.cat([self.conv6(x2), self.maxpool6(x2)], 1)
        return x3


# Inception_ResNet_A: BasicConv2d + residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) -> conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1)
        )
        # conv1x1(32) -> conv3x3(48) -> conv3x3(64)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1)
        )
        # conv1x1(384)
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)


# Inception_ResNet_B: BasicConv2d + residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) -> conv1x7(160) -> conv7x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0))
        )
        # conv1x1(1154 in the paper; 1152 here so the shapes add up)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)


# Inception_ResNet_C: BasicConv2d + residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) -> conv1x3(224) -> conv3x1(256)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0))
        )
        # conv1x1(2048 in the paper; 2144 here so the shapes add up)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branches
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res


# redutionA: BasicConv2d + MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3x3(n, stride 2, valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2),
        )
        # conv1x1(k) -> conv3x3(l) -> conv3x3(m, stride 2, valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2)
        )
        # maxpool3x3(stride 2, valid)
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2)
        )

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate the branches
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)

# redutionB: BasicConv2d + MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1x1(256) -> conv3x3(384, stride 2, valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0)
        )
        # conv1x1(256) -> conv3x3(288, stride 2, valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0),
        )
        # conv1x1(256) -> conv3x3(288) -> conv3x3(320, stride 2, valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0)
        )
        # maxpool3x3(stride 2, valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)


class Inception_ResNetv2(nn.Module):
    def __init__(self, num_classes=1000, k=256, l=256, m=384, n=384):
        super(Inception_ResNetv2, self).__init__()
        blocks = []
        blocks.append(Stem(3))
        for i in range(5):
            blocks.append(Inception_ResNet_A(384, 32, 32, 32, 32, 48, 64, 384, 0.17))
        blocks.append(redutionA(384, k, l, m, n))
        for i in range(10):
            blocks.append(Inception_ResNet_B(1152, 192, 128, 160, 192, 1152, 0.10))
        blocks.append(redutionB(1152, 256, 384, 288, 288, 320))
        for i in range(4):
            blocks.append(Inception_ResNet_C(2144, 192, 192, 224, 256, 2144, 0.20))
        # the last Inception-resnet-C block skips the final ReLU
        blocks.append(Inception_ResNet_C(2144, 192, 192, 224, 256, 2144, activation=False))
        self.features = nn.Sequential(*blocks)
        self.conv = BasicConv2d(2144, 1536, 1)
        self.global_average_pooling = nn.AdaptiveAvgPool2d((1, 1))
        # the paper uses dropout with keep probability 0.8, i.e. drop rate 0.2
        self.dropout = nn.Dropout(0.2)
        self.linear = nn.Linear(1536, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.conv(x)
        x = self.global_average_pooling(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = self.linear(x)
        return x


if __name__ == '__main__':
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model = Inception_ResNetv2().to(device)
    summary(model, input_size=(3, 299, 299))
```

summary prints the network structure and parameter counts, which makes it easy to inspect the assembled model.

## Summary

This post introduced Inception-ResNet as simply and thoroughly as possible: why and how Inception blocks are combined with residual connections, the structure of the Inception-ResNet models, and their PyTorch implementations.
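A rough way to gauge a variant's size without torchsummary is to count its trainable parameters directly (illustrative, run in the same session as the model definition above):

```python
# Count the trainable parameters of the model defined above.
model = Inception_ResNetv2()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'{n_params / 1e6:.1f}M trainable parameters')
```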
