[CodeCamp2023-367] Add pp_mobileseg model #3239
Conversation
We recommend using English, or English & Chinese, for pull requests so that we can have a broader discussion.
Hi @Yang-Changhui, we'd like to express our appreciation for your valuable contributions to mmsegmentation. Your efforts have significantly aided in enhancing the project's quality. If you're on WeChat, we'd also love for you to join our community there. Just add our assistant using the WeChat ID: openmmlabwx. When sending the friend request, remember to include the remark "mmsig + GitHub ID". Thanks again for your awesome contribution, and we're excited to have you as part of our community!
from mmengine.model import BaseModule
from mmengine.runner.checkpoint import CheckpointLoader, load_state_dict

from mmseg.models.utils.transformer_utils import DropPath
Can't find this file. We might modify it as in https://github.com/open-mmlab/mmsegmentation/blob/main/mmseg/models/backbones/swin.py#L178
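A minimal sketch of that style of fix, assuming mmcv's drop-path brick (swin.py builds it through build_dropout rather than importing DropPath directly):

from mmcv.cnn.bricks.transformer import build_dropout

# Build DropPath from mmcv's registry instead of importing it from
# the non-existent mmseg.models.utils.transformer_utils module.
drop_path = build_dropout(dict(type='DropPath', drop_prob=0.1))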
cfg1,
cfg2,
cfg3,
cfg4,
Generally, we should use meaningful parameter names.
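For example (a hypothetical renaming; the stage-based names are only a suggestion):

def __init__(self, stage1_cfg, stage2_cfg, stage3_cfg, stage4_cfg):
    ...  # same positional meaning as cfg1..cfg4, but self-describing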
class ConvBNAct(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size=1,
                 stride=1,
                 padding=0,
                 groups=1,
                 conv_cfg=dict(type='Conv'),
                 norm_cfg=dict(type='BN'),
                 act_cfg=None,
                 bias_attr=False):
        super(ConvBNAct, self).__init__()

        self.conv = build_conv_layer(
            conv_cfg,
            in_channels=in_channels,
            out_channels=out_channels,
            kernel_size=kernel_size,
            stride=stride,
            padding=padding,
            groups=groups,
            bias=None if bias_attr else False)
        self.act = build_activation_layer(act_cfg) if act_cfg is not None else nn.Identity()
        self.bn = build_norm_layer(norm_cfg, out_channels)[1] \
            if norm_cfg is not None else nn.Identity()

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.act(x)
        return x
Its function is the same as ConvModule, so we might use ConvModule instead.
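A minimal sketch of the replacement, assuming mmcv's ConvModule (which already chains conv -> norm -> act, matching the forward above); the argument values are illustrative:

from mmcv.cnn import ConvModule

layer = ConvModule(
    in_channels=32,
    out_channels=64,
    kernel_size=3,
    stride=1,
    padding=1,
    groups=1,
    bias=False,
    norm_cfg=dict(type='BN'),
    act_cfg=dict(type='ReLU'))  # act_cfg=None skips the activation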
class Conv2DBN(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 ks=1,
                 stride=1,
                 pad=0,
                 dilation=1,
                 groups=1):
        super().__init__()
        self.conv_norm = ConvModule(in_channels, out_channels, ks, stride,
                                    pad, dilation, groups, False,
                                    norm_cfg=dict(type='BN'), act_cfg=None)

    def forward(self, inputs):
        out = self.conv_norm(inputs)
        return out
There is no need to define this class.
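Call sites could then use ConvModule directly, e.g. (a sketch reusing the parameter names above):

conv_norm = ConvModule(
    in_channels, out_channels, ks, stride, pad, dilation, groups,
    bias=False, norm_cfg=dict(type='BN'), act_cfg=None)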
self.conv1 = build_conv_layer(
    conv_cfg,
    in_channels=channel,
    out_channels=channel // reduction,
    kernel_size=1,
    stride=1,
    padding=0)
self.relu = build_activation_layer(act_cfg)
self.conv2 = build_conv_layer(
    conv_cfg,
    in_channels=channel // reduction,
    out_channels=channel,
    kernel_size=1,
    stride=1,
    padding=0)
Might replace these with two ConvModule layers instead.
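A sketch of that rewrite; folding the activation into the first ConvModule keeps the conv1 -> relu -> conv2 behavior:

self.conv1 = ConvModule(
    in_channels=channel,
    out_channels=channel // reduction,
    kernel_size=1,
    norm_cfg=None,
    act_cfg=act_cfg)  # replaces the separate build_activation_layer
self.conv2 = ConvModule(
    in_channels=channel // reduction,
    out_channels=channel,
    kernel_size=1,
    norm_cfg=None,
    act_cfg=None)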
class Sea_Attention(nn.Module):
    def __init__(self,
Might rename to SeaAttention.
class Fusion_block(nn.Module):
    def __init__(self, inp, oup, embed_dim, activations=None) -> None:
Might rename to FusionBlock.
Also, parameter names such as inp and oup are hard to read. Consider changing them to more descriptive names.
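Combining both comments, a hypothetical signature could be:

class FusionBlock(nn.Module):
    # in_channels/out_channels are suggested replacements for inp/oup
    def __init__(self, in_channels, out_channels, embed_dim,
                 activations=None) -> None:
        ...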
def _create_act(act):
    if act == "hardswish":
        return nn.Hardswish()
    elif act == "relu":
        return nn.ReLU()
    elif act is None:
        return None
    else:
        raise RuntimeError(
            "The activation function is not supported: {}".format(act))
Can we replace it with build_activation_layer?
It seems build_activation_layer does not have a class for nn.Hardswish().
Then could we write a separate wrapper class and register it? That way the approach would be more general.
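With such a wrapper registered, _create_act could collapse into a config-driven call, e.g. (a sketch; which registry build_activation_layer consults depends on the mmcv/mmengine scope, so this should be verified):

from mmcv.cnn import build_activation_layer

act = build_activation_layer(dict(type='Hardswish'))  # replaces the string dispatch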
@MODELS.register_module()
class MobileSeg_Base(StrideFormer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


@MODELS.register_module()
class MobileSeg_Tiny(StrideFormer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
We might add StrideFormer to the MODELS registry and delete these two classes.
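That is, register the base class once and pick the variant purely through config arguments, e.g. (a sketch; the config values are placeholders):

@MODELS.register_module()
class StrideFormer(BaseModule):
    ...

# In a config, instead of type='MobileSeg_Base' / 'MobileSeg_Tiny':
# backbone = dict(type='StrideFormer', mobileV3_cfg=..., channels=...)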
mobileV3_cfg,
channels,
embed_dims,
key_dims=[16, 24],
depths=[2, 2],
num_heads=8,
attn_ratios=2,
mlp_ratios=[2, 4],
drop_path_rate=0.1,
act_cfg=dict(type='ReLU'),
inj_type='AAM',
out_channels=256,
dims=(128, 160),
out_feat_chs=None,
stride_attention=True,
pretrained=None,
init_cfg=None
Might add type hints.
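A partial sketch of a hinted signature, with types inferred from the defaults above (they should be confirmed against the actual usage):

from typing import List, Optional, Tuple

def __init__(self,
             mobileV3_cfg: list,
             channels: List[int],
             embed_dims: List[int],
             key_dims: List[int] = [16, 24],
             depths: List[int] = [2, 2],
             num_heads: int = 8,
             drop_path_rate: float = 0.1,
             out_channels: int = 256,
             dims: Tuple[int, int] = (128, 160),
             pretrained: Optional[str] = None,
             init_cfg: Optional[dict] = None):
    ...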
class ResidualUnit(nn.Module):
    def __init__(self,
Type hints are needed here as well.
class SqueezeAxialPositionalEmbedding(nn.Module):
    def __init__(self, dim, shape):
        super().__init__()
Might add a docstring.
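A docstring sketch in the Google style mmseg uses elsewhere (the wording is only a suggestion and should be checked against the implementation):

class SqueezeAxialPositionalEmbedding(nn.Module):
    """Learnable positional embedding for a squeezed axial dimension.

    Args:
        dim (int): Number of embedding channels.
        shape (int): Initial axial length of the embedding.
    """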
class SeaAttention(nn.Module):
    def __init__(self,
                 dim,
                 key_dim,
                 num_heads,
                 attn_ratio=4.,
                 act_cfg=None,
                 norm_cfg=dict(type='BN'),
                 stride_attention=False):
Might add a docstring and type hints.
class HSigmoid(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU6()

    def forward(self, x):
        return self.relu(x + 3) / 6


MODELS.register_module(module=HSigmoid, name='HSigmoid')
Suggested change — replace the manual registration with the decorator form:

@MODELS.register_module()
class HSigmoid(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU6()

    def forward(self, x):
        return self.relu(x + 3) / 6
class hardswish(nn.Module):
    def __init__(self, inplace=False):
        super().__init__()
        self.relu = nn.Hardswish(inplace=inplace)

    def forward(self, x):
        return self.relu(x)


MODELS.register_module(module=hardswish, name='hardswish')
Suggested change — rename to Hardswish and register with the decorator:

@MODELS.register_module()
class Hardswish(nn.Module):
    def __init__(self, inplace=False):
        super().__init__()
        self.relu = nn.Hardswish(inplace=inplace)

    def forward(self, x):
        return self.relu(x)
Might add a docstring to explain that it's a wrapper around torch.nn.Hardswish.
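For example (a one-line docstring sketch):

class Hardswish(nn.Module):
    """A wrapper around torch.nn.Hardswish so it can be built from a config dict."""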
class Hardsigmoid(nn.Module):
    def __init__(self, slope=0.2, offset=0.5, inplace=False):
        super().__init__()
        self.slope = slope
        self.offset = offset

    def forward(self, x):
        return x.mul(self.slope).add(self.offset).clamp(0, 1)


MODELS.register_module(module=Hardsigmoid, name='Hardsigmoid')
Suggested change — register with the decorator form:

@MODELS.register_module()
class Hardsigmoid(nn.Module):
    def __init__(self, slope=0.2, offset=0.5, inplace=False):
        super().__init__()
        self.slope = slope
        self.offset = offset

    def forward(self, x):
        return x.mul(self.slope).add(self.offset).clamp(0, 1)
Also, we should add a docstring.
def __init__(self,
             num_classes,
             in_channels,
             use_dw=True,
             dropout_ratio=0.1,
             align_corners=False,
             upsample='intepolate',
             out_channels=None,
             conv_cfg=dict(type='Conv'),
             act_cfg=dict(type='ReLU'),
             norm_cfg=dict(type='BN')
We should add a docstring and type hints.
act_cfg(nn.Layer, optional): The activation layer of AAM.
inj_type(string, optional): The type of injection/AAM.
Might add a brief introduction to 'AAM'.
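For example, the docstring could expand the acronym, which in the PP-MobileSeg paper stands for the Aggregated Attention Module (a sketch; the wording is a suggestion):

inj_type (str, optional): The type of feature-injection module.
    'AAM' selects the Aggregated Attention Module, which fuses
    multi-scale backbone features before the head. Defaults to 'AAM'.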
@MODELS.register_module()
class HSigmoid(nn.Module):

    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU6()

    def forward(self, x):
        return self.relu(x + 3) / 6
Might use the implementation in mmcv https://github.com/open-mmlab/mmcv/blob/main/mmcv/cnn/bricks/hsigmoid.py#L10
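A sketch of reusing that brick; mmcv's HSigmoid computes clamp((x + bias) / divisor, 0, 1), so with bias=3.0 and divisor=6.0 it matches the relu6(x + 3) / 6 form here (the defaults should still be double-checked against the installed mmcv version):

from mmcv.cnn import build_activation_layer

hsigmoid = build_activation_layer(
    dict(type='HSigmoid', bias=3.0, divisor=6.0))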
@MODELS.register_module()
class Hardswish(nn.Module):

    def __init__(self, inplace=False):
        super().__init__()
        self.relu = nn.Hardswish(inplace=inplace)

    def forward(self, x):
        return self.relu(x)
Thanks for your contribution; we appreciate it a lot. The following instructions will help keep your pull request healthy and make it easier to get feedback. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.
Motivation
Please describe the motivation of this PR and the goal you want to achieve through this PR.
Modification
Please briefly describe what modification is made in this PR.
BC-breaking (Optional)
Does the modification introduce changes that break the backward-compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
Checklist