Deep Learning Paper: Dynamic ReLU and Its PyTorch Implementation

Dynamic ReLU PDF: DY-ReLU

The proposed dynamic ReLU (DY-ReLU) adjusts a piecewise-linear activation function on the fly according to the input. Compared with ReLU and its variants, it brings a substantial performance gain for only a small amount of extra computation, and it can be dropped seamlessly into current mainstream models.
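Concretely (a brief restatement of the paper's formulation, with K = 2 linear pieces by default): for input x, the output of channel c is the maximum over K linear functions whose slopes and intercepts are produced from x by a lightweight hyper-function:

y_c = \max_{1 \le k \le K} \big( a_c^k(x)\, x_c + b_c^k(x) \big)

Setting a^1 = 1 and a^2 = b^1 = b^2 = 0 recovers the ordinary ReLU, so DY-ReLU strictly generalizes it.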

2 DY-ReLU Variants

Three variants of DY-ReLU are provided, differing in how the coefficients are shared across spatial positions and channels: DY-ReLU-A shares one set of coefficients over all channels and spatial positions, DY-ReLU-B computes channel-wise coefficients that are shared across spatial positions, and DY-ReLU-C makes the coefficients both channel-wise and spatial-wise. The code below implements variants A and B.

import torch
import torch.nn as nn


class BatchNorm(nn.Module):
    # Despite the name, this simply rescales the sigmoid output from [0, 1] to [-1, 1].
    def forward(self, x):
        return 2 * x - 1


class DynamicReLU_A(nn.Module):
    """DY-ReLU-A: one set of piecewise-linear coefficients shared by all channels and spatial positions."""

    def __init__(self, channels, K=2, ratio=6):
        super(DynamicReLU_A, self).__init__()
        mid_channels = 2 * K  # K slopes + K intercepts, shared across channels
        self.K = K
        # lambdas scale the predicted residual coefficients; init_v are the base values
        # (identity slope 1, everything else 0). Registered as buffers so they move with the module.
        self.register_buffer('lambdas', torch.Tensor([1.] * K + [0.5] * K).float())
        self.register_buffer('init_v', torch.Tensor([1.] + [0.] * (2 * K - 1)).float())
        self.avg_pool = nn.AdaptiveAvgPool2d(output_size=1)
        # Hyper-function: two FC layers applied to the globally pooled features.
        self.dynamic = nn.Sequential(
            nn.Linear(in_features=channels, out_features=channels // ratio),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=channels // ratio, out_features=mid_channels),
            nn.Sigmoid(),
            BatchNorm()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)          # global context, shape (B, C)
        z = self.dynamic(y)                      # normalized residual coefficients in [-1, 1]
        relu_coefs = z.view(-1, 2 * self.K) * self.lambdas + self.init_v   # (B, 2K)
        # Broadcast the K linear functions over every element and take the element-wise max.
        x_perm = x.transpose(0, -1).unsqueeze(-1)                          # (W, C, H, B, 1)
        output = x_perm * relu_coefs[:, :self.K] + relu_coefs[:, self.K:]  # (W, C, H, B, K)
        output = torch.max(output, dim=-1)[0].transpose(0, -1)             # back to (B, C, H, W)
        return output


class DynamicReLU_B(nn.Module):
    """DY-ReLU-B: channel-wise coefficients shared across spatial positions."""

    def __init__(self, channels, K=2, ratio=6):
        super(DynamicReLU_B, self).__init__()
        mid_channels = 2 * K * channels  # K slopes + K intercepts per channel
        self.K = K
        self.channels = channels
        self.register_buffer('lambdas', torch.Tensor([1.] * K + [0.5] * K).float())
        self.register_buffer('init_v', torch.Tensor([1.] + [0.] * (2 * K - 1)).float())
        self.avg_pool = nn.AdaptiveAvgPool2d(output_size=1)
        self.dynamic = nn.Sequential(
            nn.Linear(in_features=channels, out_features=channels // ratio),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=channels // ratio, out_features=mid_channels),
            nn.Sigmoid(),
            BatchNorm()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)
        z = self.dynamic(y)
        relu_coefs = z.view(-1, self.channels, 2 * self.K) * self.lambdas + self.init_v  # (B, C, 2K)
        x_perm = x.permute(2, 3, 0, 1).unsqueeze(-1)                                     # (H, W, B, C, 1)
        output = x_perm * relu_coefs[:, :, :self.K] + relu_coefs[:, :, self.K:]          # (H, W, B, C, K)
        output = torch.max(output, dim=-1)[0].permute(2, 3, 0, 1)                        # back to (B, C, H, W)
        return output


if __name__ == '__main__':
    model = DynamicReLU_B(64)
    print(model)
    input = torch.randn(1, 64, 56, 56)
    out = model(input)
    print(out.shape)  # torch.Size([1, 64, 56, 56])
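For reference, a minimal sketch of dropping the module into an ordinary convolutional block in place of nn.ReLU (the block below is an illustrative example, not from the original post):

import torch
import torch.nn as nn
# DynamicReLU_B is the class defined in the code above.

block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    DynamicReLU_B(64),  # channel count must match the preceding conv's output channels
)
out = block(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])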

3 Comparison with Other Common Activation Functions
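Common static activations are special cases of DY-ReLU obtained by freezing the coefficients instead of computing them from the input. A small sketch of this reduction (the piecewise_max helper below is illustrative, not part of the original code):

import torch

def piecewise_max(x, a, b):
    # y = max_k(a_k * x + b_k): the DY-ReLU form with fixed, input-independent coefficients.
    return torch.stack([ak * x + bk for ak, bk in zip(a, b)], dim=-1).max(dim=-1)[0]

x = torch.linspace(-2, 2, 5)
relu_like  = piecewise_max(x, a=[1.0, 0.0], b=[0.0, 0.0])  # max(x, 0)    -> ReLU
leaky_like = piecewise_max(x, a=[1.0, 0.1], b=[0.0, 0.0])  # max(x, 0.1x) -> LeakyReLU(0.1)
print(torch.allclose(relu_like, torch.relu(x)))                             # True
print(torch.allclose(leaky_like, torch.nn.functional.leaky_relu(x, 0.1)))   # True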
