I. Introduction

The improvement introduced in this post is the HAttention mechanism. The Hybrid Attention Transformer (HAT) is designed to boost single-image super-resolution by fusing channel attention and self-attention: channel attention identifies which channels matter most, while self-attention models the relationships between different positions within the image. By combining the two, HAT integrates global pixel information effectively and produces more accurate results. The mechanism is fairly complex (the code alone runs to about 700 lines), but it works well, is one of the newest results (from October), and is well suited for use in your own papers.

Recommendation: ⭐⭐⭐⭐⭐ (one of the latest improvement mechanisms)

Column review: YOLOv8 improvement series column — this column continuously reproduces work from top conferences — essential for research.

Contents

I. Introduction
II. The HAttention framework
  2.1 The Hybrid Attention Transformer (HAT)
III. Core code of HAttention
IV. Step-by-step: adding the HAttention mechanism
  Modification 1
  Modification 2
V. YAML files for HAttention
  5.1 HAttention YAML file 1
  5.2 HAttention YAML file 2
  5.3 Recommended places to add HAttention
  5.4 Screenshot of training with HAttention
VI. Summary

II. The HAttention framework

Official paper address
Official code address

The paper proposes a new Hybrid Attention Transformer (HAT) for single-image super-resolution. HAT combines channel attention and self-attention to activate more pixels for high-resolution reconstruction. The authors also propose an overlapping cross-attention module to strengthen interaction across windows, and introduce a same-task pre-training strategy to further unlock HAT's potential. Extensive experiments demonstrate the effectiveness of the proposed modules and the pre-training strategy; the method significantly outperforms prior state-of-the-art approaches both quantitatively and qualitatively.

The main innovations of the paper are:

1. The Hybrid Attention Transformer (HAT), which combines channel attention and self-attention mechanisms to improve single-image super-resolution.
2. An overlapping cross-attention module, which strengthens cross-window information interaction to further boost reconstruction quality.
3. A same-task pre-training strategy, a new pre-training method tailored to HAT to fully exploit its potential.

These innovations make the proposed method clearly outperform existing techniques in super-resolution reconstruction.

The first figure compares HAT against state-of-the-art models such as SwinIR and EDT at different scale factors (x2, x3, x4) on the Urban100 and Manga109 datasets. HAT shows a clear gain in PSNR (peak signal-to-noise ratio) over SwinIR and EDT; on Urban100 the improvement ranges from 0.3 dB to 1.2 dB. HAT-L, a larger variant of HAT, performs best across all tests, further confirming the model's effectiveness.

The architecture figure depicts the overall HAT design and its key components. HAT consists of three main stages: shallow feature extraction, deep feature extraction, and image reconstruction. The deep feature extraction stage stacks several Residual Hybrid Attention Groups (RHAG); each group contains multiple Hybrid Attention Blocks (HAB) and one Overlapping Cross-Attention Block (OCAB). A HAB uses a Channel Attention Block (CAB) together with window-based multi-head self-attention (W-MSA), so feature extraction accounts for correlations both across channels and across spatial positions. The OCAB further enhances interaction between features of different windows. Finally, the features produced by the stacked RHAGs are restored to a high-resolution image by the reconstruction stage (all of this is reflected in the code — which is why it runs to 700+ lines).

2.1 The Hybrid Attention Transformer (HAT)

HAT's design fuses channel attention and self-attention to improve single-image super-resolution: channel attention identifies which channels matter most, while self-attention models the relationships between positions within the image. Using both, HAT integrates global pixel information effectively and produces more accurate upsampling results. This combination lets HAT reconstruct high-frequency details better, improving the quality and accuracy of the reconstructed image.

The final figure shows local attribution map (LAM) results for different super-resolution networks along with their performance metrics. A LAM shows, for the marked region of the reconstructed high-resolution (HR) image, how important each pixel of the low-resolution (LR) input was. The diffusion index (DI) measures the range of pixels involved; higher values mean more pixels were used. The results show that HAT uses the most pixels during reconstruction; compared with EDSR, RCAN, and SwinIR, it exhibits the strongest pixel utilization and the highest PSNR/SSIM. This indicates HAT's advantage in reconstructing fine detail.
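A small aside on how the window self-attention above encodes position: every pair of positions inside an M x M window looks up a learned bias by their relative offset, so only a table of (2M-1)^2 entries is needed. Below is a minimal pure-Python sketch of that index computation (it mirrors what `calculate_rpi_sa` does in the core code, just without torch; the function name is my own):

```python
def relative_position_index(window_size: int):
    """Index into a (2*M-1)**2 bias table for every pair of positions in an M x M window."""
    m = window_size
    coords = [(y, x) for y in range(m) for x in range(m)]  # flattened window positions
    index = []
    for (yi, xi) in coords:
        row = []
        for (yj, xj) in coords:
            dy = yi - yj + (m - 1)  # shift offsets so they start from 0
            dx = xi - xj + (m - 1)
            row.append(dy * (2 * m - 1) + dx)  # unique id per (dy, dx) offset
        index.append(row)
    return index

# For m = 2 the table has (2*2-1)**2 = 9 distinct ids; the diagonal
# (zero offset) always maps to the center id (m-1)*(2m-1)+(m-1).
idx = relative_position_index(2)
```

Every pair with the same spatial offset shares one learned bias, which is exactly why `relative_position_bias_table` in the code below has `(2*Wh-1) * (2*Ww-1)` rows per head.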
III. Core code of HAttention

Copy the code below into the ultralytics/nn/modules directory: create a .py file and paste it in (I named mine DAttention.py). See Section IV for how to use it.

```python
import math
import torch
import torch.nn as nn
from basicsr.utils.registry import ARCH_REGISTRY
from basicsr.archs.arch_util import to_2tuple, trunc_normal_
from einops import rearrange


def drop_path(x, drop_prob: float = 0., training: bool = False):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

    From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0], ) + (1, ) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output


class DropPath(nn.Module):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

    From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py
    """

    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)


class ChannelAttention(nn.Module):
    """Channel attention used in RCAN.

    Args:
        num_feat (int): Channel number of intermediate features.
        squeeze_factor (int): Channel squeeze factor. Default: 16.
    """

    def __init__(self, num_feat, squeeze_factor=16):
        super(ChannelAttention, self).__init__()
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(num_feat, num_feat // squeeze_factor, 1, padding=0),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_feat // squeeze_factor, num_feat, 1, padding=0),
            nn.Sigmoid())

    def forward(self, x):
        y = self.attention(x)
        return x * y


class CAB(nn.Module):

    def __init__(self, num_feat, compress_ratio=3, squeeze_factor=30):
        super(CAB, self).__init__()
        self.cab = nn.Sequential(
            nn.Conv2d(num_feat, num_feat // compress_ratio, 3, 1, 1),
            nn.GELU(),
            nn.Conv2d(num_feat // compress_ratio, num_feat, 3, 1, 1),
            ChannelAttention(num_feat, squeeze_factor))

    def forward(self, x):
        return self.cab(x)


class Mlp(nn.Module):

    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


def window_partition(x, window_size):
    """
    Args:
        x: (b, h, w, c)
        window_size (int): window size

    Returns:
        windows: (num_windows*b, window_size, window_size, c)
    """
    b, h, w, c = x.shape
    x = x.view(b, h // window_size, window_size, w // window_size, window_size, c)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, c)
    return windows


def window_reverse(windows, window_size, h, w):
    """
    Args:
        windows: (num_windows*b, window_size, window_size, c)
        window_size (int): Window size
        h (int): Height of image
        w (int): Width of image

    Returns:
        x: (b, h, w, c)
    """
    b = int(windows.shape[0] / (h * w / window_size / window_size))
    x = windows.view(b, h // window_size, w // window_size, window_size, window_size, -1)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(b, h, w, -1)
    return x


class WindowAttention(nn.Module):
    r"""Window based multi-head self attention (W-MSA) module with relative position bias.
    It supports both of shifted and non-shifted window.

    Args:
        dim (int): Number of input channels.
        window_size (tuple[int]): The height and width of the window.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
        proj_drop (float, optional): Dropout ratio of output. Default: 0.0
    """

    def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim**-0.5

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads))  # 2*Wh-1 * 2*Ww-1, nH

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        trunc_normal_(self.relative_position_bias_table, std=.02)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, rpi, mask=None):
        """
        Args:
            x: input features with shape of (num_windows*b, n, c)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        b_, n, c = x.shape
        qkv = self.qkv(x).reshape(b_, n, 3, self.num_heads, c // self.num_heads).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        q = q * self.scale
        attn = (q @ k.transpose(-2, -1))

        relative_position_bias = self.relative_position_bias_table[rpi.view(-1)].view(
            self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1)  # Wh*Ww,Wh*Ww,nH
        relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous()  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            nw = mask.shape[0]
            attn = attn.view(b_ // nw, nw, self.num_heads, n, n) + mask.unsqueeze(1).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, n, n)
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(b_, n, c)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x


class HAB(nn.Module):
    r"""Hybrid Attention Block.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """

    def __init__(self,
                 dim,
                 input_resolution,
                 num_heads,
                 window_size=7,
                 shift_size=0,
                 compress_ratio=3,
                 squeeze_factor=30,
                 conv_scale=0.01,
                 mlp_ratio=4.,
                 qkv_bias=True,
                 qk_scale=None,
                 drop=0.,
                 attn_drop=0.,
                 drop_path=0.,
                 act_layer=nn.GELU,
                 norm_layer=nn.LayerNorm):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        if min(self.input_resolution) <= self.window_size:
            # if window size is larger than input resolution, we don't partition windows
            self.shift_size = 0
            self.window_size = min(self.input_resolution)
        assert 0 <= self.shift_size < self.window_size, 'shift_size must in 0-window_size'

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim,
            window_size=to_2tuple(self.window_size),
            num_heads=num_heads,
            qkv_bias=qkv_bias,
            qk_scale=qk_scale,
            attn_drop=attn_drop,
            proj_drop=drop)

        self.conv_scale = conv_scale
        self.conv_block = CAB(num_feat=dim, compress_ratio=compress_ratio, squeeze_factor=squeeze_factor)

        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

    def forward(self, x, x_size, rpi_sa, attn_mask):
        h, w = x_size
        b, _, c = x.shape
        # assert seq_len == h * w, "input feature has wrong size"

        shortcut = x
        x = self.norm1(x)
        x = x.view(b, h, w, c)

        # Conv_X
        conv_x = self.conv_block(x.permute(0, 3, 1, 2))
        conv_x = conv_x.permute(0, 2, 3, 1).contiguous().view(b, h * w, c)

        # cyclic shift
        if self.shift_size > 0:
            shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
            attn_mask = attn_mask
        else:
            shifted_x = x
            attn_mask = None

        # partition windows
        x_windows = window_partition(shifted_x, self.window_size)  # nw*b, window_size, window_size, c
        x_windows = x_windows.view(-1, self.window_size * self.window_size, c)  # nw*b, window_size*window_size, c

        # W-MSA/SW-MSA (to be compatible for testing on images whose shapes are the multiple of window size
        attn_windows = self.attn(x_windows, rpi=rpi_sa, mask=attn_mask)

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, c)
        shifted_x = window_reverse(attn_windows, self.window_size, h, w)  # b h w c

        # reverse cyclic shift
        if self.shift_size > 0:
            attn_x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
        else:
            attn_x = shifted_x
        attn_x = attn_x.view(b, h * w, c)

        # FFN
        x = shortcut + self.drop_path(attn_x) + conv_x * self.conv_scale
        x = x + self.drop_path(self.mlp(self.norm2(x)))

        return x


class PatchMerging(nn.Module):
    r"""Patch Merging Layer.

    Args:
        input_resolution (tuple[int]): Resolution of input feature.
        dim (int): Number of input channels.
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """

    def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
        super().__init__()
        self.input_resolution = input_resolution
        self.dim = dim
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
        self.norm = norm_layer(4 * dim)

    def forward(self, x):
        """x: b, h*w, c"""
        h, w = self.input_resolution
        b, seq_len, c = x.shape
        assert seq_len == h * w, 'input feature has wrong size'
        assert h % 2 == 0 and w % 2 == 0, f'x size ({h}*{w}) are not even.'

        x = x.view(b, h, w, c)

        x0 = x[:, 0::2, 0::2, :]  # b h/2 w/2 c
        x1 = x[:, 1::2, 0::2, :]  # b h/2 w/2 c
        x2 = x[:, 0::2, 1::2, :]  # b h/2 w/2 c
        x3 = x[:, 1::2, 1::2, :]  # b h/2 w/2 c
        x = torch.cat([x0, x1, x2, x3], -1)  # b h/2 w/2 4*c
        x = x.view(b, -1, 4 * c)  # b h/2*w/2 4*c

        x = self.norm(x)
        x = self.reduction(x)

        return x


class OCAB(nn.Module):
    # overlapping cross-attention block

    def __init__(self, dim,
                 input_resolution,
                 window_size,
                 overlap_ratio,
                 num_heads,
                 qkv_bias=True,
                 qk_scale=None,
                 mlp_ratio=2,
                 norm_layer=nn.LayerNorm):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.window_size = window_size
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim**-0.5
        self.overlap_win_size = int(window_size * overlap_ratio) + window_size

        self.norm1 = norm_layer(dim)
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.unfold = nn.Unfold(
            kernel_size=(self.overlap_win_size, self.overlap_win_size),
            stride=window_size,
            padding=(self.overlap_win_size - window_size) // 2)

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros(
                (window_size + self.overlap_win_size - 1) * (window_size + self.overlap_win_size - 1), num_heads))

        trunc_normal_(self.relative_position_bias_table, std=.02)
        self.softmax = nn.Softmax(dim=-1)

        self.proj = nn.Linear(dim, dim)

        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=nn.GELU)

    def forward(self, x, x_size, rpi):
        h, w = x_size
        b, _, c = x.shape

        shortcut = x
        x = self.norm1(x)
        x = x.view(b, h, w, c)

        qkv = self.qkv(x).reshape(b, h, w, 3, c).permute(3, 0, 4, 1, 2)  # 3, b, c, h, w
        q = qkv[0].permute(0, 2, 3, 1)  # b, h, w, c
        kv = torch.cat((qkv[1], qkv[2]), dim=1)  # b, 2*c, h, w

        # partition windows
        q_windows = window_partition(q, self.window_size)  # nw*b, window_size, window_size, c
        q_windows = q_windows.view(-1, self.window_size * self.window_size, c)  # nw*b, window_size*window_size, c

        kv_windows = self.unfold(kv)  # b, c*w*w, nw
        kv_windows = rearrange(
            kv_windows,
            'b (nc ch owh oww) nw -> nc (b nw) (owh oww) ch',
            nc=2,
            ch=c,
            owh=self.overlap_win_size,
            oww=self.overlap_win_size).contiguous()  # 2, nw*b, ow*ow, c
        k_windows, v_windows = kv_windows[0], kv_windows[1]  # nw*b, ow*ow, c

        b_, nq, _ = q_windows.shape
        _, n, _ = k_windows.shape
        d = self.dim // self.num_heads
        q = q_windows.reshape(b_, nq, self.num_heads, d).permute(0, 2, 1, 3)  # nw*b, nH, nq, d
        k = k_windows.reshape(b_, n, self.num_heads, d).permute(0, 2, 1, 3)  # nw*b, nH, n, d
        v = v_windows.reshape(b_, n, self.num_heads, d).permute(0, 2, 1, 3)  # nw*b, nH, n, d

        q = q * self.scale
        attn = (q @ k.transpose(-2, -1))

        relative_position_bias = self.relative_position_bias_table[rpi.view(-1)].view(
            self.window_size * self.window_size, self.overlap_win_size * self.overlap_win_size, -1)  # ws*ws, wse*wse, nH
        relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous()  # nH, ws*ws, wse*wse
        attn = attn + relative_position_bias.unsqueeze(0)

        attn = self.softmax(attn)
        attn_windows = (attn @ v).transpose(1, 2).reshape(b_, nq, self.dim)

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, self.dim)
        x = window_reverse(attn_windows, self.window_size, h, w)  # b h w c
        x = x.view(b, h * w, self.dim)

        x = self.proj(x) + shortcut

        x = x + self.mlp(self.norm2(x))
        return x


class AttenBlocks(nn.Module):
    """A series of attention blocks for one RHAG.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        depth (int): Number of blocks.
        num_heads (int): Number of attention heads.
        window_size (int): Local window size.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
    """

    def __init__(self,
                 dim,
                 input_resolution,
                 depth,
                 num_heads,
                 window_size,
                 compress_ratio,
                 squeeze_factor,
                 conv_scale,
                 overlap_ratio,
                 mlp_ratio=4.,
                 qkv_bias=True,
                 qk_scale=None,
                 drop=0.,
                 attn_drop=0.,
                 drop_path=0.,
                 norm_layer=nn.LayerNorm,
                 downsample=None,
                 use_checkpoint=False):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.depth = depth
        self.use_checkpoint = use_checkpoint

        # build blocks
        self.blocks = nn.ModuleList([
            HAB(
                dim=dim,
                input_resolution=input_resolution,
                num_heads=num_heads,
                window_size=window_size,
                shift_size=0 if (i % 2 == 0) else window_size // 2,
                compress_ratio=compress_ratio,
                squeeze_factor=squeeze_factor,
                conv_scale=conv_scale,
                mlp_ratio=mlp_ratio,
                qkv_bias=qkv_bias,
                qk_scale=qk_scale,
                drop=drop,
                attn_drop=attn_drop,
                drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
                norm_layer=norm_layer) for i in range(depth)
        ])

        # OCAB
        self.overlap_attn = OCAB(
            dim=dim,
            input_resolution=input_resolution,
            window_size=window_size,
            overlap_ratio=overlap_ratio,
            num_heads=num_heads,
            qkv_bias=qkv_bias,
            qk_scale=qk_scale,
            mlp_ratio=mlp_ratio,
            norm_layer=norm_layer)

        # patch merging layer
        if downsample is not None:
            self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
        else:
            self.downsample = None

    def forward(self, x, x_size, params):
        for blk in self.blocks:
            x = blk(x, x_size, params['rpi_sa'], params['attn_mask'])

        x = self.overlap_attn(x, x_size, params['rpi_oca'])

        if self.downsample is not None:
            x = self.downsample(x)
        return x


class RHAG(nn.Module):
    """Residual Hybrid Attention Group (RHAG).

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        depth (int): Number of blocks.
        num_heads (int): Number of attention heads.
        window_size (int): Local window size.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
        img_size: Input image size.
        patch_size: Patch size.
        resi_connection: The convolutional block before residual connection.
    """

    def __init__(self,
                 dim,
                 input_resolution,
                 depth,
                 num_heads,
                 window_size,
                 compress_ratio,
                 squeeze_factor,
                 conv_scale,
                 overlap_ratio,
                 mlp_ratio=4.,
                 qkv_bias=True,
                 qk_scale=None,
                 drop=0.,
                 attn_drop=0.,
                 drop_path=0.,
                 norm_layer=nn.LayerNorm,
                 downsample=None,
                 use_checkpoint=False,
                 img_size=224,
                 patch_size=4,
                 resi_connection='1conv'):
        super(RHAG, self).__init__()

        self.dim = dim
        self.input_resolution = input_resolution

        self.residual_group = AttenBlocks(
            dim=dim,
            input_resolution=input_resolution,
            depth=depth,
            num_heads=num_heads,
            window_size=window_size,
            compress_ratio=compress_ratio,
            squeeze_factor=squeeze_factor,
            conv_scale=conv_scale,
            overlap_ratio=overlap_ratio,
            mlp_ratio=mlp_ratio,
            qkv_bias=qkv_bias,
            qk_scale=qk_scale,
            drop=drop,
            attn_drop=attn_drop,
            drop_path=drop_path,
            norm_layer=norm_layer,
            downsample=downsample,
            use_checkpoint=use_checkpoint)

        if resi_connection == '1conv':
            self.conv = nn.Conv2d(dim, dim, 3, 1, 1)
        elif resi_connection == 'identity':
            self.conv = nn.Identity()

        self.patch_embed = PatchEmbed(
            img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, norm_layer=None)

        self.patch_unembed = PatchUnEmbed(
            img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, norm_layer=None)

    def forward(self, x, x_size, params):
        return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size, params), x_size))) + x


class PatchEmbed(nn.Module):
    r"""Image to Patch Embedding

    Args:
        img_size (int): Image size. Default: 224.
        patch_size (int): Patch token size. Default: 4.
        in_chans (int): Number of input image channels. Default: 3.
        embed_dim (int): Number of linear projection output channels. Default: 96.
        norm_layer (nn.Module, optional): Normalization layer. Default: None
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
        self.img_size = img_size
        self.patch_size = patch_size
        self.patches_resolution = patches_resolution
        self.num_patches = patches_resolution[0] * patches_resolution[1]

        self.in_chans = in_chans
        self.embed_dim = embed_dim

        if norm_layer is not None:
            self.norm = norm_layer(embed_dim)
        else:
            self.norm = None

    def forward(self, x):
        x = x.flatten(2).transpose(1, 2)  # b Ph*Pw c
        if self.norm is not None:
            x = self.norm(x)
        return x


class PatchUnEmbed(nn.Module):
    r"""Image to Patch Unembedding

    Args:
        img_size (int): Image size. Default: 224.
        patch_size (int): Patch token size. Default: 4.
        in_chans (int): Number of input image channels. Default: 3.
        embed_dim (int): Number of linear projection output channels. Default: 96.
        norm_layer (nn.Module, optional): Normalization layer. Default: None
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
        self.img_size = img_size
        self.patch_size = patch_size
        self.patches_resolution = patches_resolution
        self.num_patches = patches_resolution[0] * patches_resolution[1]

        self.in_chans = in_chans
        self.embed_dim = embed_dim

    def forward(self, x, x_size):
        x = x.transpose(1, 2).contiguous().view(x.shape[0], self.embed_dim, x_size[0], x_size[1])  # b Ph*Pw c
        return x


class Upsample(nn.Sequential):
    """Upsample module.

    Args:
        scale (int): Scale factor. Supported scales: 2^n and 3.
        num_feat (int): Channel number of intermediate features.
    """

    def __init__(self, scale, num_feat):
        m = []
        if (scale & (scale - 1)) == 0:  # scale = 2^n
            for _ in range(int(math.log(scale, 2))):
                m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
                m.append(nn.PixelShuffle(2))
        elif scale == 3:
            m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
            m.append(nn.PixelShuffle(3))
        else:
            raise ValueError(f'scale {scale} is not supported. Supported scales: 2^n and 3.')
        super(Upsample, self).__init__(*m)


@ARCH_REGISTRY.register()
class HAT(nn.Module):
    r"""Hybrid Attention Transformer
        A PyTorch implementation of: `Activating More Pixels in Image Super-Resolution Transformer`.
        Some codes are based on SwinIR.

    Args:
        img_size (int | tuple(int)): Input image size. Default 64
        patch_size (int | tuple(int)): Patch size. Default: 1
        in_chans (int): Number of input image channels. Default: 3
        embed_dim (int): Patch embedding dimension. Default: 96
        depths (tuple(int)): Depth of each Swin Transformer layer.
        num_heads (tuple(int)): Number of attention heads in different layers.
        window_size (int): Window size. Default: 7
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
        drop_rate (float): Dropout rate. Default: 0
        attn_drop_rate (float): Attention dropout rate. Default: 0
        drop_path_rate (float): Stochastic depth rate. Default: 0.1
        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
        patch_norm (bool): If True, add normalization after patch embedding. Default: True
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
        upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction
        img_range: Image range. 1. or 255.
        upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
        resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
    """

    def __init__(self,
                 in_chans=3,
                 img_size=64,
                 patch_size=1,
                 embed_dim=96,
                 depths=(6, 6, 6, 6),
                 num_heads=(6, 6, 6, 6),
                 window_size=7,
                 compress_ratio=3,
                 squeeze_factor=30,
                 conv_scale=0.01,
                 overlap_ratio=0.5,
                 mlp_ratio=4.,
                 qkv_bias=True,
                 qk_scale=None,
                 drop_rate=0.,
                 attn_drop_rate=0.,
                 drop_path_rate=0.1,
                 norm_layer=nn.LayerNorm,
                 ape=False,
                 patch_norm=True,
                 use_checkpoint=False,
                 upscale=2,
                 img_range=1.,
                 upsampler='',
                 resi_connection='1conv',
                 **kwargs):
        super(HAT, self).__init__()

        self.window_size = window_size
        self.shift_size = window_size // 2
        self.overlap_ratio = overlap_ratio

        num_in_ch = in_chans
        num_out_ch = in_chans
        num_feat = 64
        self.img_range = img_range
        if in_chans == 3:
            rgb_mean = (0.4488, 0.4371, 0.4040)
            self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1)
        else:
            self.mean = torch.zeros(1, 1, 1, 1)
        self.upscale = upscale
        self.upsampler = upsampler

        # relative position index
        relative_position_index_SA = self.calculate_rpi_sa()
        relative_position_index_OCA = self.calculate_rpi_oca()
        self.register_buffer('relative_position_index_SA', relative_position_index_SA)
        self.register_buffer('relative_position_index_OCA', relative_position_index_OCA)

        # ------------------------- 1, shallow feature extraction ------------------------- #
        self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)

        # ------------------------- 2, deep feature extraction ------------------------- #
        self.num_layers = len(depths)
        self.embed_dim = embed_dim
        self.ape = ape
        self.patch_norm = patch_norm
        self.num_features = embed_dim
        self.mlp_ratio = mlp_ratio

        # split image into non-overlapping patches
        self.patch_embed = PatchEmbed(
            img_size=img_size,
            patch_size=patch_size,
            in_chans=embed_dim,
            embed_dim=embed_dim,
            norm_layer=norm_layer if self.patch_norm else None)
        num_patches = self.patch_embed.num_patches
        patches_resolution = self.patch_embed.patches_resolution
        self.patches_resolution = patches_resolution

        # merge non-overlapping patches into image
        self.patch_unembed = PatchUnEmbed(
            img_size=img_size,
            patch_size=patch_size,
            in_chans=embed_dim,
            embed_dim=embed_dim,
            norm_layer=norm_layer if self.patch_norm else None)

        # absolute position embedding
        if self.ape:
            self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
            trunc_normal_(self.absolute_pos_embed, std=.02)

        self.pos_drop = nn.Dropout(p=drop_rate)

        # stochastic depth
        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))]  # stochastic depth decay rule

        # build Residual Hybrid Attention Groups (RHAG)
        self.layers = nn.ModuleList()
        for i_layer in range(self.num_layers):
            layer = RHAG(
                dim=embed_dim,
                input_resolution=(patches_resolution[0], patches_resolution[1]),
                depth=depths[i_layer],
                num_heads=num_heads[i_layer],
                window_size=window_size,
                compress_ratio=compress_ratio,
                squeeze_factor=squeeze_factor,
                conv_scale=conv_scale,
                overlap_ratio=overlap_ratio,
                mlp_ratio=self.mlp_ratio,
                qkv_bias=qkv_bias,
                qk_scale=qk_scale,
                drop=drop_rate,
                attn_drop=attn_drop_rate,
                drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],  # no impact on SR results
                norm_layer=norm_layer,
                downsample=None,
                use_checkpoint=use_checkpoint,
                img_size=img_size,
                patch_size=patch_size,
                resi_connection=resi_connection)
            self.layers.append(layer)
        self.norm = norm_layer(self.num_features)

        # build the last conv layer in deep feature extraction
        if resi_connection == '1conv':
            self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
        elif resi_connection == 'identity':
            self.conv_after_body = nn.Identity()

        # ------------------------- 3, high quality image reconstruction ------------------------- #
        if self.upsampler == 'pixelshuffle':
            # for classical SR
            self.conv_before_upsample = nn.Sequential(
                nn.Conv2d(embed_dim, num_feat, 3, 1, 1), nn.LeakyReLU(inplace=True))
            self.upsample = Upsample(upscale, num_feat)
            self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)

        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    def calculate_rpi_sa(self):
        # calculate relative position index for SA
        coords_h = torch.arange(self.window_size)
        coords_w = torch.arange(self.window_size)
        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww
        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww
        relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]  # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size - 1
        relative_coords[:, :, 0] *= 2 * self.window_size - 1
        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
        return relative_position_index

    def calculate_rpi_oca(self):
        # calculate relative position index for OCA
        window_size_ori = self.window_size
        window_size_ext = self.window_size + int(self.overlap_ratio * self.window_size)

        coords_h = torch.arange(window_size_ori)
        coords_w = torch.arange(window_size_ori)
        coords_ori = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, ws, ws
        coords_ori_flatten = torch.flatten(coords_ori, 1)  # 2, ws*ws

        coords_h = torch.arange(window_size_ext)
        coords_w = torch.arange(window_size_ext)
        coords_ext = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, wse, wse
        coords_ext_flatten = torch.flatten(coords_ext, 1)  # 2, wse*wse

        relative_coords = coords_ext_flatten[:, None, :] - coords_ori_flatten[:, :, None]  # 2, ws*ws, wse*wse

        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # ws*ws, wse*wse, 2
        relative_coords[:, :, 0] += window_size_ori - window_size_ext + 1  # shift to start from 0
        relative_coords[:, :, 1] += window_size_ori - window_size_ext + 1

        relative_coords[:, :, 0] *= window_size_ori + window_size_ext - 1
        relative_position_index = relative_coords.sum(-1)
        return relative_position_index

    def calculate_mask(self, x_size):
        # calculate attention mask for SW-MSA
        h, w = x_size
        img_mask = torch.zeros((1, h, w, 1))  # 1 h w 1
        h_slices = (slice(0, -self.window_size), slice(-self.window_size,
                                                       -self.shift_size), slice(-self.shift_size, None))
        w_slices = (slice(0, -self.window_size), slice(-self.window_size,
                                                       -self.shift_size), slice(-self.shift_size, None))
        cnt = 0
        for h in h_slices:
            for w in w_slices:
                img_mask[:, h, w, :] = cnt
                cnt += 1

        mask_windows = window_partition(img_mask, self.window_size)  # nw, window_size, window_size, 1
        mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))

        return attn_mask

    @torch.jit.ignore
    def no_weight_decay(self):
        return {'absolute_pos_embed'}

    @torch.jit.ignore
    def no_weight_decay_keywords(self):
        return {'relative_position_bias_table'}

    def forward_features(self, x):
        x_size = (x.shape[2], x.shape[3])

        # Calculate attention mask and relative position index in advance to speed up inference.
        # The original code is very time-consuming for large window size.
        attn_mask = self.calculate_mask(x_size).to(x.device)
        params = {'attn_mask': attn_mask, 'rpi_sa': self.relative_position_index_SA, 'rpi_oca': self.relative_position_index_OCA}

        x = self.patch_embed(x)
        if self.ape:
            x = x + self.absolute_pos_embed
        x = self.pos_drop(x)

        for layer in self.layers:
            x = layer(x, x_size, params)

        x = self.norm(x)  # b seq_len c
        x = self.patch_unembed(x, x_size)

        return x

    def forward(self, x):
        self.mean = self.mean.type_as(x)
        x = (x - self.mean) * self.img_range

        if self.upsampler == 'pixelshuffle':
            # for classical SR
            x = self.conv_first(x)
            x = self.conv_after_body(self.forward_features(x)) + x
            x = self.conv_before_upsample(x)
            x = self.conv_last(self.upsample(x))

        x = x / self.img_range + self.mean

        return x
```

IV. Step-by-step: adding the HAttention mechanism

This HAttention code cannot be used as released; I made some modifications on top of the official code to make it easy to use. If this helps you, please like and bookmark the post, and if you manage to reproduce it, please leave a supportive comment.

The tutorial follows.

Modification 1

We have already copied the code into a new .py file (DAttention.py) under the ultralytics/nn/modules directory. Now open the file ultralytics/nn/tasks.py and import the attention mechanism at the top, as shown in the figure.

Modification 2

Find the code around line 700 and add the following branch as I do. The red box in the screenshot contains many entries; you only need to keep HAT in the set — any other names you don't have can be ignored.

```python
        elif m in {HAT}:
            args = [ch[f], *args]
```

That completes the modifications; the code can now be used directly. (It is this simple because I modified the official code so that the usage is unified.)

V. YAML files for HAttention

Here I recommend two ways of adding the module. Do not add this kind of attention mechanism to the backbone: adding it in the detection head, or at the Neck outputs, gains the most points. If you put it in the backbone, the information has already been lost after all the subsequent processing, so it has little effect.

5.1 HAttention YAML file 1

Here I added a single HAttention module at the large-object detection output. This is the version I actually ran in my experiments; this article was requested by a reader, so the results are fresh. Feel free to request other mechanisms you would like to see covered.

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e.
```
```yaml
  # 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)
  - [-1, 1, HAT, []]  # 22
  - [[15, 18, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

5.2 HAttention YAML file 2

This version adds the HAT mechanism at all three detection layers. I have not tested its effect myself, but it will likely need a lot of GPU memory, so be sure to lower the batch size when using it or you will run out of memory. Please do not post such out-of-memory errors, or other trivial ones, in the comment section — it sometimes makes it look as if the mechanism I published is broken. Just now a reader using my SPD-Conv post got an `autopad` error simply because that module was not imported; fixing it is a matter of hovering over the name and adding the import. I hope that, along with reading my posts, you also build your own hands-on skills — copying alone is not enough, and I genuinely hope you learn something.

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs.
```
# For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]   # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]    # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)
  - [-1, 1, HAT, []]  # 16
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 19 (P4/16-medium)
  - [-1, 1, HAT, []]  # 20
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 23 (P5/32-large)
  - [-1, 1, HAT, []]  # 24
  - [[16, 20, 24], 1, Detect, [nc]]  # Detect(P3, P4, P5)

5.3 Recommended positions for HAttention

HAttention is a plug-and-play attention module. It can be added in many places, and different positions give different effects, so below I recommend a few positions for reference — of course, you do not have to follow them exactly.

Residual connections: insert HAT into the residual connections of a residual network.

Neck: the Neck of YOLOv8 handles feature fusion; adding HAT here helps the model fuse features from different levels more effectively (YAML files one and two).

Backbone: replace convolution blocks in the backbone (only convolutions that do not change the channel count can be replaced).
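The "only convolutions that do not change the channel count" constraint above comes from HAT's shape-preserving design: like the channel-attention (CAB) half of its hybrid attention block, the module's output tensor has exactly the same shape as its input, which is what makes it plug-and-play. The sketch below illustrates this in NumPy with a squeeze-and-excite-style channel gate; the MLP weights are random stand-ins for trained parameters, not the real HAT module.

```python
import numpy as np

def channel_attention(x, reduction=4, seed=0):
    """Toy channel-attention gate: pool spatially, run a tiny MLP,
    and rescale each channel. Output shape always equals input shape."""
    c, h, w = x.shape
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # squeeze weights (stand-in)
    w2 = rng.standard_normal((c, c // reduction)) * 0.1   # excite weights (stand-in)
    pooled = x.mean(axis=(1, 2))                          # global average pool -> (c,)
    hidden = np.maximum(w1 @ pooled, 0.0)                 # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))           # sigmoid, one weight per channel
    return x * gate[:, None, None]                        # rescale channels; spatial dims untouched

feat = np.random.default_rng(1).standard_normal((16, 8, 8))
out = channel_attention(feat)
print(out.shape)  # (16, 8, 8): same shape in and out
```

Because input and output shapes match, such a block can be dropped after any layer (e.g. after a C2f block in the Neck) without touching the channel bookkeeping of the layers around it — which is also why it can only *replace* channel-preserving convolutions.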
There are many more possible positions than a single article can cover. Later I will release a file that integrates over a hundred improvement mechanisms, along with many fusion modules; modifications inside the detection head in particular are very difficult and belong to the advanced series, which will come later.

5.4 HAttention training screenshots

Below are training screenshots with HAttention added.

You can check the run output and the insertion position below, so there is no issue of the posted code being incomplete or failing to run. If you have questions, you can also leave them in the comment section — I will answer what I can when I see them. One warning appeared during my run that I did not suppress; I judged it unlikely to affect training or accuracy, so I left it unhandled.

VI. Summary

That concludes the main content of this post. Here I recommend my YOLOv8 effective-improvement column. It is newly opened, with an average quality score of 98; going forward I will reproduce papers from the latest top conferences and also backfill some older improvement mechanisms. The column is currently free to read (for now — follow early so you don't miss it). If this post helped you, subscribe to the column and follow for more updates.

Column: YOLOv8 Improvement Series — continuously covering top-conference work — a research essential.
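As a final illustration of Modification Two above, the channel-forwarding branch can be sketched in isolation. Inside ultralytics' parse_model, `ch` is the running list of per-layer output channels and `f` is the layer's 'from' index; the `elif m in {HAT}` branch simply prepends the producing layer's channel count so HAT is constructed with the correct width. The helper name and the channel list below are hypothetical stand-ins, not part of the ultralytics code.

```python
def hat_args(ch, f, args):
    """Hypothetical stand-in for the `elif m in {HAT}` branch:
    mirrors `args = [ch[f], *args]` from Modification Two."""
    return [ch[f], *args]

ch = [64, 128, 256]          # example output channels of layers already parsed
print(hat_args(ch, -1, []))  # [256]: HAT is built on the previous layer's channels
```

This is also why HAT takes an empty args list `[]` in the YAML files: the only constructor argument it needs, the input channel count, is filled in automatically during model parsing.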
