Attention mechanism paper: Concurrent Spatial and Channel SE in Fully Convolutional Networks and its PyTorch implementation

User contribution · 866 · 2022-10-06

Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks

1 Overview

This paper improves on the SE module by designing three SE variants — cSE, sSE, and scSE — and achieves appreciable gains on MRI brain segmentation and CT organ segmentation tasks.

2 Spatial Squeeze and Channel Excitation Block (cSE)

This is the original SE Block; for details, see Attention paper: Squeeze-and-Excitation Networks and its PyTorch implementation. PyTorch code:

```python
import torch.nn as nn

# Named cSE_Module (the original post calls it SE_Module) so that the
# scSE_Module below can reference it consistently.
class cSE_Module(nn.Module):
    def __init__(self, channel, ratio=16):
        super(cSE_Module, self).__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excitation = nn.Sequential(
            nn.Linear(in_features=channel, out_features=channel // ratio),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=channel // ratio, out_features=channel),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.squeeze(x).view(b, c)
        z = self.excitation(y).view(b, c, 1, 1)
        return x * z.expand_as(x)
```
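The squeeze-then-excite data flow of the block above can be written out functionally, which makes the tensor shapes at each step explicit. This is a minimal sketch, not part of the paper; the helper name `channel_se` and the explicit weight tensors are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

# Functional sketch of channel SE (hypothetical helper, assumed weights):
# squeeze via global average pooling, excite via two linear layers,
# then rescale the input channel-wise with the resulting gate in (0, 1).
def channel_se(x, w1, b1, w2, b2):
    b, c = x.shape[:2]
    y = x.mean(dim=(2, 3))                  # squeeze: (b, c)
    y = F.relu(F.linear(y, w1, b1))         # reduce:  (b, c // ratio)
    z = torch.sigmoid(F.linear(y, w2, b2))  # expand:  (b, c), gate in (0, 1)
    return x * z.view(b, c, 1, 1)           # broadcast over H, W

x = torch.randn(2, 16, 8, 8)
w1 = torch.randn(4, 16); b1 = torch.zeros(4)    # ratio = 4 for this sketch
w2 = torch.randn(16, 4); b2 = torch.zeros(16)
out = channel_se(x, w1, b1, w2, b2)
```

Because the gate is a sigmoid output, every channel is scaled by a factor strictly between 0 and 1, so the block can only attenuate channels, never amplify them.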

3 Channel Squeeze and Spatial Excitation Block (sSE)

PyTorch code:

```python
class sSE_Module(nn.Module):
    def __init__(self, channel):
        super(sSE_Module, self).__init__()
        self.spatial_excitation = nn.Sequential(
            nn.Conv2d(in_channels=channel, out_channels=1,
                      kernel_size=1, stride=1, padding=0),
            nn.Sigmoid()
        )

    def forward(self, x):
        z = self.spatial_excitation(x)
        return x * z.expand_as(x)
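In contrast to cSE, the sSE block squeezes the channel dimension and excites spatially: a 1×1 convolution collapses C channels into a single attention map, which gates every channel at each location. A minimal functional sketch (the helper name `spatial_se` and the weight tensors are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

# Functional sketch of spatial SE (hypothetical helper, assumed weights):
# a 1x1 convolution maps (b, C, h, w) -> (b, 1, h, w); the sigmoid turns
# it into a per-pixel gate that rescales all channels at that location.
def spatial_se(x, weight, bias):
    z = torch.sigmoid(F.conv2d(x, weight, bias))  # (b, 1, h, w)
    return x * z                                   # broadcast over channels

x = torch.randn(2, 16, 8, 8)
weight = torch.randn(1, 16, 1, 1)  # 1x1 conv kernel: C -> 1
bias = torch.zeros(1)
out = spatial_se(x, weight, bias)
```

Note the symmetry with cSE: cSE produces one gate per channel shared across all pixels, while sSE produces one gate per pixel shared across all channels.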

4 Spatial and Channel Squeeze & Excitation Block (scSE)

PyTorch code:

```python
class scSE_Module(nn.Module):
    def __init__(self, channel, ratio=16):
        super(scSE_Module, self).__init__()
        self.cSE = cSE_Module(channel, ratio)
        self.sSE = sSE_Module(channel)

    def forward(self, x):
        return self.cSE(x) + self.sSE(x)
```
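Putting the three blocks together, an end-to-end smoke test can confirm that scSE preserves the input shape, since it is the element-wise sum of two recalibrated copies of the same feature map. This is a self-contained sketch that restates compact versions of the classes above so it runs on its own:

```python
import torch
import torch.nn as nn

# Compact restatement of the article's three modules for a shape check.
class cSE_Module(nn.Module):
    def __init__(self, channel, ratio=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excitation = nn.Sequential(
            nn.Linear(channel, channel // ratio),
            nn.ReLU(inplace=True),
            nn.Linear(channel // ratio, channel),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        z = self.excitation(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * z  # one gate per channel

class sSE_Module(nn.Module):
    def __init__(self, channel):
        super().__init__()
        self.spatial_excitation = nn.Sequential(
            nn.Conv2d(channel, 1, kernel_size=1), nn.Sigmoid()
        )

    def forward(self, x):
        return x * self.spatial_excitation(x)  # one gate per pixel

class scSE_Module(nn.Module):
    def __init__(self, channel, ratio=16):
        super().__init__()
        self.cSE = cSE_Module(channel, ratio)
        self.sSE = sSE_Module(channel)

    def forward(self, x):
        # Element-wise sum of the two recalibrated feature maps.
        return self.cSE(x) + self.sSE(x)

x = torch.randn(2, 32, 16, 16)
out = scSE_Module(32, ratio=8)(x)
```

Because both branches output tensors of the input's shape, scSE can be dropped after any convolutional block in an encoder-decoder network without changing downstream layer dimensions.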

5 Experimental Results

