PyTorch: Examples of Various 2D Convolutions

Updated: 2020-08-17 08:24:01 Author: startmvc

Standard convolution

Implemented with nn.Conv2d(), usually followed by BatchNorm (BN) and ReLU.

Parameter count: N × N × C_in × C_out, plus C_out if bias is used (the bias contributes relatively little to the total, so it is ignored below).


import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):

    def __init__(self, C_in, C_out, kernel_size, stride, padding, affine=True):
        super(ConvBNReLU, self).__init__()
        self.op = nn.Sequential(
            # Standard convolution; bias is omitted because BN follows
            nn.Conv2d(C_in, C_out, kernel_size, stride=stride, padding=padding, bias=False),
            nn.BatchNorm2d(C_out, eps=1e-3, affine=affine),
            nn.ReLU(inplace=False)
        )

    def forward(self, x):
        return self.op(x)
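
As a quick sanity check of the parameter-count formula above, a minimal sketch (the shapes 16, 32 and the 3×3 kernel are arbitrary example values, not from the original):

block = ConvBNReLU(C_in=16, C_out=32, kernel_size=3, stride=1, padding=1)
conv_params = sum(p.numel() for p in block.op[0].parameters())  # op[0] is the Conv2d
print(conv_params)      # 4608
print(3 * 3 * 16 * 32)  # N*N*C_in*C_out = 4608, matching the formula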

Depthwise separable convolution

The convolution is split into an N × N depthwise convolution (which does not change the channel count) and a 1 × 1 pointwise convolution (which maps to the output channel count), again followed by BN and ReLU. The parameter count drops noticeably.

Parameter count:

N × N × C_in + C_in × 1 × 1 × C_out


class SepConv(nn.Module):

    def __init__(self, C_in, C_out, kernel_size, stride, padding, affine=True):
        super(SepConv, self).__init__()
        self.op = nn.Sequential(
            nn.ReLU(inplace=False),
            # Depthwise: groups=C_in gives one N x N filter per input channel
            nn.Conv2d(C_in, C_in, kernel_size=kernel_size, stride=stride, padding=padding, groups=C_in, bias=False),
            # Pointwise: 1 x 1 convolution mixes channels and sets the output width
            nn.Conv2d(C_in, C_out, kernel_size=1, padding=0, bias=False),
            nn.BatchNorm2d(C_out, eps=1e-3, affine=affine)
        )

    def forward(self, x):
        return self.op(x)
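
Under the same example shapes as above (hypothetical values), the savings are easy to verify by counting only the convolution weights:

sep = SepConv(C_in=16, C_out=32, kernel_size=3, stride=1, padding=1)
conv_params = sum(p.numel() for m in sep.op if isinstance(m, nn.Conv2d) for p in m.parameters())
print(conv_params)                   # 656 (depthwise 3*3*16 = 144, pointwise 16*32 = 512)
print(3 * 3 * 16 + 16 * 1 * 1 * 32)  # N*N*C_in + C_in*1*1*C_out = 656, vs. 4608 above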

Dilated convolution

Dilated (atrous) convolution was proposed for image semantic segmentation, where downsampling reduces resolution and loses information: inserting holes (dilation) into the kernel enlarges the receptive field without downsampling.

The parameter count is unchanged, but the receptive field grows (it can also be combined with depthwise separable convolution, as in the implementation below).


class DilConv(nn.Module):

    def __init__(self, C_in, C_out, kernel_size, stride, padding, dilation, affine=True):
        super(DilConv, self).__init__()
        self.op = nn.Sequential(
            nn.ReLU(inplace=False),
            # Dilated depthwise convolution: dilation inserts gaps between kernel taps
            nn.Conv2d(C_in, C_in, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=C_in, bias=False),
            nn.Conv2d(C_in, C_out, kernel_size=1, padding=0, bias=False),
            nn.BatchNorm2d(C_out, eps=1e-3, affine=affine),
        )

    def forward(self, x):
        return self.op(x)
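
The effective kernel size of a dilated convolution is k + (k-1)(d-1), so the padding must grow with the dilation to preserve the spatial size. A small shape check, with example values assumed here:

x = torch.randn(1, 16, 32, 32)
# kernel_size=3, dilation=2 -> effective kernel 3 + (3-1)*(2-1) = 5, so padding=2 keeps 32x32
dil = DilConv(C_in=16, C_out=32, kernel_size=3, stride=1, padding=2, dilation=2)
print(dil(x).shape)  # torch.Size([1, 32, 32, 32])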

Identity

This is not really a convolution operation, but it is useful for implementing shortcut (skip) connections across layers.


class Identity(nn.Module):

    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x
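
A minimal sketch of how Identity can serve as the shortcut branch; the residual-style wrapper below is illustrative, not from the original:

class SkipBlock(nn.Module):
    """Adds a shortcut branch to a convolutional branch, residual-style."""

    def __init__(self, C):
        super(SkipBlock, self).__init__()
        self.conv = ConvBNReLU(C, C, kernel_size=3, stride=1, padding=1)
        self.shortcut = Identity()  # passes x through unchanged

    def forward(self, x):
        return self.conv(x) + self.shortcut(x)

y = SkipBlock(16)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])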

These PyTorch 2D convolution examples are everything the editor has to share; we hope they serve as a useful reference, and we hope you will continue to support 脚本之家.
