Implement SK Module in Glasses #283
base: develop
Conversation
```diff
 def test_att():
-    x = torch.rand(1, 48, 8, 8)
+    x = torch.rand(2, 48, 8, 8)
```
Because there is a BatchNorm2d inside SelectiveKernel, it raises an error when batch=1.
Yes! Put the module in `.eval()` mode.
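For context, a minimal sketch of the failure mode: the SK attention branch global-average-pools the feature map to 1x1 before its BatchNorm2d, so with batch=1 that norm sees a single value per channel in training mode and cannot compute batch statistics. The shapes below are illustrative, not taken from the test suite:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(32)
x = torch.rand(1, 32, 1, 1)  # shape after global average pooling with batch=1

bn.train()
try:
    bn(x)  # only one value per channel -> batch statistics are undefined
except ValueError as e:
    print(e)  # "Expected more than 1 value per channel when training, ..."

bn.eval()   # eval mode uses running statistics instead of batch statistics
out = bn(x)  # works fine
print(out.shape)  # torch.Size([1, 32, 1, 1])
```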
Codecov Report
```
@@            Coverage Diff             @@
##           develop     #283      +/-  ##
===========================================
+ Coverage    97.28%   97.31%    +0.03%
===========================================
  Files           86       87        +1
  Lines         3056     3203      +147
===========================================
+ Hits          2973     3117      +144
- Misses          83       86        +3
```
Continue to review full report at Codecov.
Recording the older code for reference:

```python
import torch
import torch.nn as nn
from typing import Union, List
from glasses.nn.att.utils import make_divisible
from ..blocks import ConvBnAct
from einops.layers.torch import Rearrange, Reduce


def _kernel_valid(k):
    # Recursively validate that every kernel size is odd and >= 3.
    # (The original returned inside the loop, so only the first
    # kernel size in a list was ever checked.)
    if isinstance(k, (list, tuple)):
        for ki in k:
            _kernel_valid(ki)
        return
    assert k >= 3 and k % 2


class SelectiveKernelAtt(nn.Module):
    def __init__(
        self,
        features: int,
        num_paths: int = 2,
        mid_features: int = 32,
        act_layer: nn.Module = nn.ReLU,
        norm_layer: nn.Module = nn.BatchNorm2d,
    ):
        super().__init__()
        self.num_paths = num_paths
        # Fuse the paths, squeeze to a per-channel descriptor, expand to
        # one attention map per path, and normalize across paths.
        self.att = nn.Sequential(
            Reduce("b n c h w -> b c h w", reduction="sum"),
            Reduce("b c h w -> b c 1 1", reduction="mean"),
            nn.Conv2d(features, mid_features, kernel_size=1, bias=False),
            norm_layer(mid_features),
            act_layer(inplace=True),
            nn.Conv2d(mid_features, features * num_paths, kernel_size=1, bias=False),
            Rearrange("b (n c) h w -> b n c h w", n=num_paths, c=features),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        assert x.shape[1] == self.num_paths
        x = self.att(x)
        return x


class SelectiveKernel(nn.Module):
    def __init__(
        self,
        in_features: int,
        out_features: int = None,
        kernel_size: Union[List, int] = None,
        stride: int = 1,
        dilation: int = 1,
        groups: int = 1,
        reduction: int = 16,
        reduction_divisor: int = 8,
        reduced_features: int = None,
        keep_3x3: bool = True,
        activation: nn.Module = nn.ReLU,
        normalization: nn.Module = nn.BatchNorm2d,
    ):
        super().__init__()
        out_features = out_features or in_features
        kernel_size = kernel_size or [3, 5]
        _kernel_valid(kernel_size)
        if not isinstance(kernel_size, list):
            kernel_size = [kernel_size] * 2
        if keep_3x3:
            # Emulate the larger kernels with dilated 3x3 convolutions.
            dilation = [dilation * (k - 1) // 2 for k in kernel_size]
            kernel_size = [3] * len(kernel_size)
        else:
            dilation = [dilation] * len(kernel_size)
        self.num_paths = len(kernel_size)
        self.in_features = in_features
        self.out_features = out_features  # the original had a stray trailing comma here, storing a tuple
        groups = min(out_features, groups)
        # One convolutional branch per (kernel size, dilation) pair.
        self.paths = nn.ModuleList([
            ConvBnAct(
                in_features=in_features,
                out_features=out_features,
                activation=activation,
                normalization=normalization,
                mode="same",
                stride=stride,
                kernel_size=k,
                dilation=d,
            )
            for k, d in zip(kernel_size, dilation)
        ])
        attn_features = reduced_features or make_divisible(
            out_features // reduction, divisor=reduction_divisor
        )
        self.attn = SelectiveKernelAtt(out_features, self.num_paths, attn_features)

    def forward(self, x):
        x_paths = [op(x) for op in self.paths]  # each: (b, c, h, w)
        x = torch.stack(x_paths, dim=1)         # (b, n, c, h, w)
        x_attn = self.attn(x)                   # per-path channel attention
        x = x * x_attn
        return torch.sum(x, dim=1)              # weighted sum over paths
```
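A quick usage sketch of the recorded module (batch size 2 because of the BatchNorm2d issue discussed above; the output shape assumes stride=1 with "same" padding, as in the defaults):

```python
sk = SelectiveKernel(in_features=48)
x = torch.rand(2, 48, 8, 8)
out = sk(x)  # two branches (3x3 and dilated 3x3), fused by learned attention
print(out.shape)  # torch.Size([2, 48, 8, 8])
```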
Thank you for the PR. Let's

- add typing
- remove bad practices such as:

```python
if not isinstance(kernel_size, list):
    kernel_size = [kernel_size] * 2
if keep_3x3:
    dilation = [1 * (k - 1) // 2 for k in kernel_size]
    kernel_size = [3] * len(kernel_size)
else:
    dilation = [1 * (k - 1) // 2 for k in kernel_size]
```

- decouple each part of the module
- let the user pass a block, defaulting to ConvBnAct (see the sketch below)
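As a sketch of the last two points, a hypothetical decoupled "paths" sub-module where the branch factory is injected via a `block` parameter defaulting to ConvBnAct. Names and signature here are illustrative, not the final glasses API:

```python
from typing import Callable, List

import torch.nn as nn

from glasses.nn.blocks import ConvBnAct  # same block used in the PR code


class SelectiveKernelPaths(nn.Module):
    """Hypothetical sketch: the branch block is injected (default ConvBnAct),
    so a user can swap in any block with a compatible keyword interface."""

    def __init__(
        self,
        in_features: int,
        out_features: int,
        kernel_sizes: List[int],
        block: Callable[..., nn.Module] = ConvBnAct,
    ):
        super().__init__()
        self.paths = nn.ModuleList([
            block(
                in_features=in_features,
                out_features=out_features,
                kernel_size=k,
                mode="same",
            )
            for k in kernel_sizes
        ])

    def forward(self, x):
        # Returns one tensor per branch; the caller stacks them and
        # applies the selective-kernel attention over the path axis.
        return [path(x) for path in self.paths]
```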
Sure, I will update my code tonight. Thanks for reviewing!
Paper
Reference
TODO