Welcome back to the Tiny Giant series, where I share what I learned about MobileNet architectures. In the past two articles I covered MobileNetV1 and MobileNetV2. Check out references [1] and [2] if you're interested in reading them. In today's article I would like to continue with the next version of the model: MobileNetV3.
MobileNetV3 was first proposed in a paper titled "Searching for MobileNetV3" written by Howard et al. in 2019 [3]. Just a quick recap: the main idea of the first MobileNet version was replacing full convolutions with depthwise separable convolutions, which reduced the number of parameters by nearly 90% compared to its standard CNN counterpart. In the second MobileNet version, the authors introduced the so-called inverted residual and linear bottleneck mechanisms, which they integrated into the original MobileNetV1 building blocks. Now in the third MobileNet version, the authors tried to push the performance of the network even further by incorporating Squeeze-and-Excitation (SE) modules and hard activation functions into the building blocks. Additionally, the overall structure of MobileNetV3 itself is partially designed using NAS (Neural Architecture Search), which essentially works like hyperparameter tuning at the architectural level, maximizing accuracy while minimizing latency. However, note that in this article I won't go into how NAS works in detail. Instead, I'll focus on the final design of MobileNetV3 proposed in the paper.
The Detailed MobileNetV3 Architecture
The authors propose two variants of this model, which they refer to as MobileNetV3-Large and MobileNetV3-Small. You can see the details of the two architectures in Figure 1 below.
Taking a closer look at the architecture, we can see that the two networks mainly consist of bneck (bottleneck) blocks. The configuration of the blocks themselves is described in the columns exp size, #out, SE, NL, and s. The internal structure of these blocks as well as the corresponding parameter configurations will be discussed further in the following subsection.
The Bottleneck
MobileNetV3 uses a modified version of the building blocks used in MobileNetV2. As I mentioned earlier, what makes the two different is the presence of the SE module and the use of hard activation functions. You can see the two building blocks in Figure 2, with MobileNetV2 on the top and MobileNetV3 on the bottom.

Notice that the first two convolution layers in both building blocks are basically the same: a pointwise convolution followed by a depthwise convolution. The former is used to expand the number of channels to exp size (expansion size), while the latter is responsible for processing each channel of the resulting tensor independently. The only difference between the two building blocks lies in the activation functions used, which the authors refer to as NL (Nonlinearity). In MobileNetV2, the activation functions placed after the two convolution layers are fixed to ReLU6, whereas in MobileNetV3 they can be either ReLU6 or hard-swish. The RE and HS you saw earlier in Figure 1 refer to these two types of activations.
Next, in MobileNetV3 we place the SE module after the depthwise convolution layer. If you're not yet familiar with the SE module, it's essentially a building block we can attach to any kind of CNN-based model. This component is useful for assigning weights to different channels, allowing the model to pay more attention to the important channels only. I also have a separate article discussing the SE module in detail; click on the link at reference number [4] if you want to read that one. It is important to note that the SE module used here is slightly different, in that the last FC layer uses hard-sigmoid rather than the standard sigmoid activation function. (I'll talk more about the hard activations used in MobileNetV3 in the next subsection.) In fact, the SE module itself isn't always included in every bottleneck block. If you go back to Figure 1, you'll notice that some of the bottleneck blocks have a checkmark in the SE column, indicating that the SE module is applied. On the other hand, some blocks don't include the module, probably because the NAS process didn't find any performance improvement from using SE modules in those blocks.
Once the SE module is attached, we need to place another pointwise convolution, which is responsible for adjusting the number of output channels according to the #out column in Figure 1. This pointwise convolution doesn't include any activation function, aligning with the linear bottleneck design originally introduced in MobileNetV2. I actually need to clarify something here. If you take a look at the MobileNetV2 building block in Figure 2 above, you'll see that the last pointwise convolution has a ReLU6 placed on it. I believe this is a mistake made by the authors, because according to the MobileNetV2 paper [6], the ReLU6 should be in the first pointwise convolution at the beginning of the block instead.
Last but not least, notice that there is also a residual connection that skips across all layers in the bottleneck block. This connection is only present when the output tensor has exactly the same dimensions as the input, i.e., when the number of input and output channels is the same and when the s (stride) is 1.
Hard-Sigmoid and Hard-Swish
The activation functions used in MobileNetV3 are not commonly found in other deep learning models. First of all, let's look at the hard-sigmoid activation, which is the one used in the SE module as a substitute for the regular sigmoid. Take a look at Figure 3 below to see the difference between the two.

Here you might be wondering, why don't we just use the regular sigmoid? Why do we need a piecewise linear function that appears less smooth instead? To answer this question, we first need to understand the mathematical definition of the sigmoid function, which I show in Figure 4 below.

We can clearly see in the figure above that the original sigmoid function involves an exponential term in the denominator. This term makes the function computationally expensive, which in turn makes the activation less suitable for low-power devices. Not only that, the output of the sigmoid function itself is a high-precision floating-point value, which is also not preferable for low-power devices due to their limited support for handling such values.
If you look at Figure 3 again, you might think that the hard-sigmoid function is directly derived from the original sigmoid. That's actually not quite right. Despite having a similar shape, hard-sigmoid is built using ReLU6 instead, which can formally be expressed as in Figure 5 below. Here you can see that the equation is much simpler since it only consists of basic arithmetic operations and clipping, allowing it to be computed much faster.

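If you want to verify this yourself, the short snippet below compares the ReLU6-based construction against PyTorch's built-in Hardsigmoid implementation. Note that this is just a quick sanity check on my part, not part of the model code we will build later.
import torch
import torch.nn.functional as F

x = torch.linspace(-6, 6, steps=13)

# Hard-sigmoid built from ReLU6, following the equation in Figure 5.
hard_sigmoid_manual = F.relu6(x + 3) / 6

# PyTorch's built-in implementation for comparison.
hard_sigmoid_builtin = F.hardsigmoid(x)

print(torch.allclose(hard_sigmoid_manual, hard_sigmoid_builtin))  # True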
The next activation function we're going to use in MobileNetV3 is the so-called hard-swish, which, when used, is applied after each of the first two convolution layers in the bottleneck block. Just like sigmoid and hard-sigmoid, the graph of the hard-swish function looks similar to the original one.

The original swish function itself can mathematically be expressed by the equation in Figure 7. Again, since the equation involves a sigmoid, it will definitely slow down the computation. Hence, to speed things up, we can simply replace the sigmoid with the hard-sigmoid we just discussed. By doing so, we obtain the hard version of the swish activation function, as shown in Figure 8.


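The same kind of quick check works for hard-swish: multiplying the input by the ReLU6-based hard-sigmoid reproduces PyTorch's built-in Hardswish. Again, this is only a side verification, not part of the model implementation.
import torch
import torch.nn.functional as F

x = torch.linspace(-6, 6, steps=13)

# Hard-swish as defined in Figure 8: x multiplied by hard-sigmoid(x).
hard_swish_manual = x * F.relu6(x + 3) / 6

# PyTorch's built-in implementation for comparison.
hard_swish_builtin = F.hardswish(x)

print(torch.allclose(hard_swish_manual, hard_swish_builtin))  # True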
Some Experimental Results
Before we get into the experimental results, you need to know that there are two parameters in MobileNetV3 that allow us to adjust the model size according to our needs. These two parameters are the width multiplier and the input resolution, which in MobileNetV1 are known as α and ρ, respectively. Although we can technically set these values freely, the authors already provided several numbers we can use. For the width multiplier, we can set it to 0.35, 0.5, 0.75, 1.0, or 1.25, where using a value smaller than 1.0 causes the model to have fewer channels than those listed in Figure 1, effectively reducing the model size. For instance, if we set this parameter to 0.35, then the model will only have 35% of its default width (i.e., channel count) throughout the entire network.
Meanwhile, the input resolution can be 96, 128, 160, 192, 224, or 256, which, as the name suggests, directly controls the spatial dimension of the input image. It's worth noting that although using a small input size reduces the number of operations during inference, it doesn't affect the model size at all. So, if your goal is to reduce model size, you need to adjust the width multiplier, whereas if your goal is to lower computational cost, you can play around with both the width multiplier and the input resolution.
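To give a rough picture of what the width multiplier does, the tiny snippet below (a helper of my own, not from the paper) shows how a channel count from Figure 1 shrinks or grows for the officially suggested multiplier values. Note that the official implementation additionally rounds the result to a multiple of 8, which I skip here for simplicity.
# Hypothetical helper illustrating the effect of the width multiplier.
def scaled_channels(channels, width_multiplier):
    return int(width_multiplier * channels)

for multiplier in [0.35, 0.5, 0.75, 1.0, 1.25]:
    print(multiplier, scaled_channels(160, multiplier))
# 0.35 -> 56, 0.5 -> 80, 0.75 -> 120, 1.0 -> 160, 1.25 -> 200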
Now looking at the experimental results in Figure 9, we can clearly see that MobileNetV3 outperforms MobileNetV2 in terms of accuracy at similar latency. The MobileNetV3-Small with the default configuration (i.e., width multiplier 1.0 and input resolution 224×224) does have a lower accuracy than the largest MobileNetV2 variant. But if you take the default MobileNetV3-Large into account, it wins easily over the largest MobileNetV2 both in terms of accuracy and latency. Furthermore, we can still push the accuracy of MobileNetV3 even further by enlarging the model size by 1.25 times (the blue data point at the top right), but keep in mind that doing so significantly sacrifices computational speed.

The authors also conducted a comparative analysis with other lightweight models, the results of which are shown in the table in Figure 10.

The rows of the table above are divided into two groups, where the upper group compares models with complexity similar to MobileNetV3-Large, while the lower group consists of models comparable to MobileNetV3-Small. Here you can see that both V3-Large and V3-Small achieved the best accuracy on ImageNet within their respective groups. It's worth noting that although MnasNet-A1 and V3-Large have exactly the same accuracy, the number of operations (MAdds) of the former is higher, which leads to higher latency, as seen in columns P-1, P-2, and P-3 (measured in milliseconds). In case you're wondering, the labels P-1, P-2, and P-3 correspond to different Google Pixel phones used to measure the actual computational speed. Next, it's important to acknowledge that both MobileNetV3 variants have the highest parameter count (the params column) compared to the other models in their group. However, this doesn't seem to be a major concern for the authors, as the primary goal of MobileNetV3 is to minimize latency, even if that means having a slightly bigger model.
The next experiment the authors conducted concerned the effect of value quantization, i.e., a technique that reduces the precision of floating-point numbers to speed up computation. While the networks already incorporate hard activation functions, which are friendly to quantized values, this experiment takes quantization a step further by applying it to the entire network to see how much the speed improves. The experimental results with quantization applied are shown in Figure 11 below.

If you compare the results of V2 and V3 in Figure 11 with the corresponding models in Figure 10, you'll notice a decrease in latency, showing that the use of low-precision numbers does improve computational speed. However, it is important to keep in mind that this also leads to a decrease in accuracy.
MobileNetV3 Implementation
I think the explanations above cover pretty much everything you need to know about the theory behind MobileNetV3. Now in this section I'm going to bring you to the most fun part of this article: implementing MobileNetV3 from scratch.
As always, the very first thing we do is import the required modules.
# Codeblock 1
import torch
import torch.nn as nn
Afterwards, we need to initialize the configurable parameters of the model, namely WIDTH_MULTIPLIER, INPUT_RESOLUTION, and NUM_CLASSES, as shown in Codeblock 2 below. I believe the first two variables are straightforward, as I explained them thoroughly in the previous section. Here I decided to assign default values to them. You can definitely change these numbers based on the values provided in the paper if you want to adjust the complexity of the model. Next, the third variable corresponds to the number of output neurons in the classification head. Here I set it to 1000 because the model was originally trained on the ImageNet-1K dataset. It's worth noting that the MobileNetV3 architecture is actually not limited to classification tasks only; it can also be used for object detection and semantic segmentation, as demonstrated in the paper. However, since the focus of this article is to implement the backbone, let's just use the standard classification head for the output layer to keep things simple.
# Codeblock 2
WIDTH_MULTIPLIER = 1.0
INPUT_RESOLUTION = 224
NUM_CLASSES      = 1000
What we’re going to do subsequent is to wrap the repeating elements into separate lessons. By doing this, we are going to later have the ability to merely instantiate them every time wanted as an alternative of rewriting the identical code over and over. Now let’s start with the Squeeze-and-Excitation module first.
The Squeeze-and-Excitation Module
The implementation of this component is shown in Codeblock 3. I'm not going to go very deep into the code since it's almost exactly the same as the one in my previous article [4]. Generally speaking, this code works by representing each input channel with a single number (line #(1)), processing the resulting vector with a sequence of linear layers (#(2–3)), then converting it into a weight vector (#(4)). Keep in mind that in the original SE module we typically use the standard sigmoid activation function to obtain the weight vector, but here in MobileNetV3 we use hard-sigmoid instead. This weight vector will then be multiplied with the original tensor, which reduces the influence of channels that don't contribute to the final output (#(5)).
# Codeblock 3
class SEModule(nn.Module):
    def __init__(self, num_channels, r):
        super().__init__()
        
        self.global_pooling = nn.AdaptiveAvgPool2d(output_size=(1,1))
        self.fc0 = nn.Linear(in_features=num_channels,
                             out_features=num_channels//r, 
                             bias=False)
        self.relu6 = nn.ReLU6()
        self.fc1 = nn.Linear(in_features=num_channels//r,
                             out_features=num_channels, 
                             bias=False)
        self.hardsigmoid = nn.Hardsigmoid()

    def forward(self, x):
        print(f'original\t\t: {x.size()}')
        
        squeezed = self.global_pooling(x)              #(1)
        print(f'after avgpool\t\t: {squeezed.size()}')
        
        squeezed = torch.flatten(squeezed, 1)
        print(f'after flatten\t\t: {squeezed.size()}')
        
        excited = self.fc0(squeezed)                   #(2)
        print(f'after fc0\t\t: {excited.size()}')
        
        excited = self.relu6(excited)
        print(f'after relu6\t\t: {excited.size()}')
        
        excited = self.fc1(excited)                    #(3)
        print(f'after fc1\t\t: {excited.size()}')
        
        excited = self.hardsigmoid(excited)            #(4)
        print(f'after hardsigmoid\t: {excited.size()}')
        
        excited = excited[:, :, None, None]
        print(f'after reshape\t\t: {excited.size()}')
        
        scaled = x * excited                           #(5)
        print(f'after scaling\t\t: {scaled.size()}')
        
        return scaled
Now let’s verify if the above code works correctly by creating an SEModule occasion and passing a dummy tensor via it. See Codeblock 4 under for the small print. Right here I configure the SE module to simply accept a 512-channel picture for the enter. In the meantime, the r (discount ratio) parameter is ready to 4, which means that the vector size between the 2 FC layers goes to be 4 occasions smaller than that of its enter and output. It is perhaps value realizing that this quantity is completely different from the one talked about within the unique Squeeze-and-Excitation paper [7], the place r = 16 is alleged to be the candy spot for balancing accuracy and complexity.
# Codeblock 4
semodule = SEModule(num_channels=512, r=4)
x = torch.randn(1, 512, 28, 28)
out = semodule(x)
If the code above produces the following output, it confirms that our SE module implementation is correct, since the input tensor successfully passed through all layers within the entire SE module.
# Codeblock 4 Output
original          : torch.Size([1, 512, 28, 28])
after avgpool     : torch.Size([1, 512, 1, 1])
after flatten     : torch.Size([1, 512])
after fc0         : torch.Size([1, 128])
after relu6       : torch.Size([1, 128])
after fc1         : torch.Size([1, 512])
after hardsigmoid : torch.Size([1, 512])
after reshape     : torch.Size([1, 512, 1, 1])
after scaling     : torch.Size([1, 512, 28, 28])
The Convolution Block
The subsequent element I’m going to create is the one wrapped within the ConvBlock class, which the detailed implementation may be seen in Codeblock 5. In truth, that is really simply a normal convolution layer, however we don’t merely use nn.Conv2d as a result of in CNN we usually use the Conv-BN-ReLU construction. Therefore, will probably be handy if we simply group these three layers collectively inside a single class. Nevertheless, as an alternative of really following this commonplace construction, we’re going to customise it to match the necessities for the MobileNetV3 structure.
# Codeblock 5
class ConvBlock(nn.Module):
    def __init__(self, 
                 in_channels,             #(1)
                 out_channels,            #(2)
                 kernel_size,             #(3)
                 stride,                  #(4)
                 padding,                 #(5)
                 groups=1,                #(6)
                 batchnorm=True,          #(7)
                 activation=nn.ReLU6()):  #(8)
        super().__init__()
        
        bias = False if batchnorm else True    #(9)
        
        self.conv = nn.Conv2d(in_channels=in_channels, 
                              out_channels=out_channels,
                              kernel_size=kernel_size, 
                              stride=stride, 
                              padding=padding, 
                              groups=groups,
                              bias=bias)
        self.bn = nn.BatchNorm2d(num_features=out_channels) if batchnorm else nn.Identity()  #(10)
        self.activation = activation
    
    def forward(self, x):    #(11)
        print(f'original\t\t: {x.size()}')
        
        x = self.conv(x)
        print(f'after conv\t\t: {x.size()}')
        
        x = self.bn(x)
        print(f'after bn\t\t: {x.size()}')
        
        x = self.activation(x)
        print(f'after activation\t: {x.size()}')
        
        return x
There are several parameters you need to pass to instantiate a ConvBlock. The first five (#(1–5)) are quite straightforward, as they are basically just the standard parameters of the nn.Conv2d layer. Here I make the groups parameter configurable (#(6)) so that this class can be used flexibly not only for standard convolutions but also for depthwise convolutions. Next, at line #(7) I create a parameter called batchnorm, which determines whether or not a ConvBlock instance includes a batch normalization layer. This is done because there are some cases where we don't apply this layer, i.e., in the last two convolutions labeled NBN (which stands for no batch normalization) in Figure 1. The last parameter we have here is the activation function (#(8)). Later on, there will be cases requiring us to set it to nn.ReLU6(), nn.Hardswish(), or nn.Identity() (no activation).
Inside the __init__() method, two things happen depending on the argument passed to the batchnorm parameter. When we set it to True, first, the bias term of the convolution layer is deactivated (#(9)), and second, bn becomes an nn.BatchNorm2d() layer (#(10)). The bias term is not used in this case because applying batch normalization after the convolution cancels it out, so there's basically no point in using a bias in the first place. Meanwhile, if we set the batchnorm parameter to False, the bias variable is going to be True, since in this situation it will not be canceled out. The bn itself will just be an identity layer, meaning that it won't do anything to the tensor.
Regarding the forward() method (#(11)), I don't think I need to explain anything, as all we do there is pass a tensor through the layers sequentially. Now let's move on to Codeblock 6 to see whether our ConvBlock implementation is correct. Here I create two ConvBlock instances, where the first one uses the default batchnorm and activation, while the second one omits the batch normalization layer (#(1)) and uses the hard-swish activation function (#(2)). Instead of passing a tensor through them, here I want you to see in the resulting output that our code correctly builds both structures according to the input arguments we pass.
# Codeblock 6
convblock1 = ConvBlock(in_channels=64, 
                       out_channels=128, 
                       kernel_size=3, 
                       stride=2, 
                       padding=1)
convblock2 = ConvBlock(in_channels=64, 
                       out_channels=128, 
                       kernel_size=3, 
                       stride=2, 
                       padding=1, 
                       batchnorm=False,             #(1)
                       activation=nn.Hardswish())   #(2)
print(convblock1)
print('')
print(convblock2)
# Codeblock 6 Output
ConvBlock(
  (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (activation): ReLU6()
)
ConvBlock(
  (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
  (bn): Identity()
  (activation): Hardswish()
)
The Bottleneck
With the SEModule and ConvBlock done, we can now move on to the main component of the MobileNetV3 architecture: the bottleneck. What we essentially do in the bottleneck is place one layer after another, following the general structure shown earlier in Figure 2. In the case of MobileNetV2, it only consists of three convolution layers, whereas here in MobileNetV3 we have an additional SE block placed between the second and the third convolutions. Take a look at Codeblocks 7a and 7b to see how I implement the bottleneck block for MobileNetV3.
# Codeblock 7a
class Bottleneck(nn.Module):
    def __init__(self, 
                 in_channels, 
                 out_channels, 
                 kernel_size, 
                 stride,
                 padding,
                 exp_size,     #(1)
                 se,           #(2)
                 activation):
        super().__init__()
        
        self.add = in_channels == out_channels and stride == 1    #(3)
        
        self.conv0 = ConvBlock(in_channels=in_channels,      #(4)
                               out_channels=exp_size,        #(5)
                               kernel_size=1,                #(6)
                               stride=1, 
                               padding=0,
                               activation=activation)
                               
        self.conv1 = ConvBlock(in_channels=exp_size,         #(7)
                               out_channels=exp_size,        #(8)
                               kernel_size=kernel_size,      #(9)
                               stride=stride, 
                               padding=padding,
                               groups=exp_size,              #(10)
                               activation=activation)
        
        self.semodule = SEModule(num_channels=exp_size, r=4) if se else nn.Identity()    #(11)
        
        self.conv2 = ConvBlock(in_channels=exp_size,         #(12)
                               out_channels=out_channels,    #(13)
                               kernel_size=1,                #(14)
                               stride=1, 
                               padding=0, 
                               activation=nn.Identity())     #(15)
At a glance, the input parameters of the Bottleneck class look similar to those of the ConvBlock class. This definitely makes sense because we will indeed use them to instantiate ConvBlock instances inside the Bottleneck. However, if you take a closer look, you'll notice that there are some other parameters you haven't seen before, namely exp_size (#(1)) and se (#(2)). Later on, the input arguments for these parameters will be obtained from the configuration provided in the table in Figure 1.
Inside the __init__() method, what we need to do first is check whether the input and output tensor dimensions are the same using the code at line #(3). By doing this, the add variable will contain either True or False. This dimensionality check is important because we need to decide whether or not to perform an element-wise summation between the two to implement the skip connection that jumps across all layers within the bottleneck block.
Next, let's instantiate the layers themselves, the first two of which are a pointwise convolution (conv0) and a depthwise convolution (conv1). For conv0, we need to set the kernel size to 1×1 (#(6)), while for conv1 the kernel size should match the one from the input argument (#(9)), which can be either 3×3 or 5×5. It is necessary to apply padding in the ConvBlock to prevent the feature map from shrinking after every convolution operation. For kernel sizes of 1×1, 3×3, and 5×5, the required padding values are 0, 1, and 2, respectively, as sketched in the snippet below. Regarding the number of channels, conv0 is responsible for expanding it from in_channels to exp_size (#(4–5)). Meanwhile, the numbers of input and output channels of conv1 are exactly the same (#(7–8)). In addition, for the conv1 layer the groups parameter needs to be set to exp_size (#(10)) because we want each input channel to be processed independently of the others.
After the first two convolution layers are done, the next thing to instantiate is the Squeeze-and-Excitation module (#(11)). Here we need to set the input channel count to exp_size, matching the tensor size produced by the conv1 layer. Remember that the SE module isn't always used, hence the instantiation of this component needs to be placed inside a condition, where it will actually be instantiated only when the se parameter is True. Otherwise, it will just be an identity layer.
Finally, the last convolution layer (conv2) is responsible for mapping the number of output channels from exp_size to out_channels (#(12–13)). Just like the conv0 layer, this one is also a pointwise convolution, hence we set the kernel size to 1×1 (#(14)) so that it only focuses on aggregating information along the channel dimension. The activation function of this layer is fixed to nn.Identity() (#(15)) because here we implement the idea of the linear bottleneck.
And that's pretty much everything for the layers inside the bottleneck block. All we need to do afterwards is create the flow of the network in the forward() method, as shown in Codeblock 7b below.
    # Codeblock 7b
    def forward(self, x):
        residual = x
        print(f'original\t\t: {x.size()}')
        
        x = self.conv0(x)
        print(f'after conv0\t\t: {x.size()}')
        
        x = self.conv1(x)
        print(f'after conv1\t\t: {x.size()}')
        
        x = self.semodule(x)
        print(f'after semodule\t\t: {x.size()}')
        
        x = self.conv2(x)
        print(f'after conv2\t\t: {x.size()}')
        
        if self.add:
            x += residual
            print(f'after summation\t\t: {x.size()}')
        
        return x
Now I would like to test the Bottleneck class we just created by simulating the third row of the MobileNetV3-Large architecture in the table in Figure 1. Take a look at Codeblock 8 below to see how I do this. If you go back to the architectural details, you'll see that this bottleneck accepts a tensor of size 16×112×112 (#(7)). In this case, the bottleneck block is configured to expand the number of channels to 64 (#(3)) before eventually shrinking it to 24 (#(1)). The kernel size of the depthwise convolution is set to 3×3 (#(2)) and the stride is set to 2 (#(4)), which will reduce the spatial dimension by half. Here we use ReLU6 for the activation function (#(6)) of the first two convolutions. Finally, the SE module will not be applied (#(5)) since there is no checkmark in the SE column of the table.
# Codeblock 8
bottleneck = Bottleneck(in_channels=16,
                        out_channels=24,   #(1)
                        kernel_size=3,     #(2)
                        exp_size=64,       #(3)
                        stride=2,          #(4)
                        padding=1, 
                        se=False,          #(5)
                        activation=nn.ReLU6())  #(6)
x = torch.randn(1, 16, 112, 112)           #(7)
out = bottleneck(x)
If you run the above code, the following output should appear on your screen.
# Codeblock 8 Output
original        : torch.Size([1, 16, 112, 112])
after conv0     : torch.Size([1, 64, 112, 112])
after conv1     : torch.Size([1, 64, 56, 56])
after semodule  : torch.Size([1, 64, 56, 56])
after conv2     : torch.Size([1, 24, 56, 56])
This output confirms that our implementation is correct in terms of tensor shape, where the spatial dimension halves from 112×112 to 56×56 while the number of channels correctly expands from 16 to 64 and then reduces from 64 to 24. Speaking more specifically about the SE module, we can see in the output above that the tensor is still passed through this component even though we set the se parameter to False. In fact, if you print out the detailed architecture of this bottleneck like I do in Codeblock 9, you will see that semodule is just an identity layer, which effectively makes this structure behave as if we were passing the output of conv1 directly to conv2.
# Codeblock 9
bottleneck
# Codeblock 9 Output
Bottleneck(
  (conv0): ConvBlock(
    (conv): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (conv1): ConvBlock(
    (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)
    (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (semodule): Identity()
  (conv2): ConvBlock(
    (conv): Conv2d(64, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): Identity()
  )
)
The above bottleneck is going to behave differently if we instantiate it with the se parameter set to True. In Codeblock 10 below, I create the bottleneck block from the fifth row of the MobileNetV3-Large architecture. In this case, if you print out the detailed structure, you will see that semodule consists of all layers in the SEModule class we created earlier instead of just being an identity layer like before.
# Codeblock 10
bottleneck = Bottleneck(in_channels=24, 
                        out_channels=40, 
                        kernel_size=5, 
                        exp_size=72,
                        stride=2, 
                        padding=2, 
                        se=True, 
                        activation=nn.ReLU6())
bottleneck
# Codeblock 10 Output
Bottleneck(
  (conv0): ConvBlock(
    (conv): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (conv1): ConvBlock(
    (conv): Conv2d(72, 72, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=72, bias=False)
    (bn): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (semodule): SEModule(
    (global_pooling): AdaptiveAvgPool2d(output_size=(1, 1))
    (fc0): Linear(in_features=72, out_features=18, bias=False)
    (relu6): ReLU6()
    (fc1): Linear(in_features=18, out_features=72, bias=False)
    (hardsigmoid): Hardsigmoid()
  )
  (conv2): ConvBlock(
    (conv): Conv2d(72, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): Identity()
  )
)
The Full MobileNetV3
Now that all the components have been created, what we need to do next is assemble the main class of the MobileNetV3 model. But before doing so, I would like to initialize a list that stores the input arguments used for instantiating the bottleneck blocks, as shown in Codeblock 11 below. Keep in mind that these arguments are written according to the MobileNetV3-Large version. You'll need to adjust the values in the BOTTLENECKS list if you want to create the small version instead.
# Codeblock 11
HS = nn.Hardswish()
RE = nn.ReLU6()
BOTTLENECKS = [[16,  16,  3, 16,  False, RE, 1, 1], 
               [16,  24,  3, 64,  False, RE, 2, 1], 
               [24,  24,  3, 72,  False, RE, 1, 1], 
               [24,  40,  5, 72,  True,  RE, 2, 2], 
               [40,  40,  5, 120, True,  RE, 1, 2], 
               [40,  40,  5, 120, True,  RE, 1, 2], 
               [40,  80,  3, 240, False, HS, 2, 1], 
               [80,  80,  3, 200, False, HS, 1, 1], 
               [80,  80,  3, 184, False, HS, 1, 1], 
               [80,  80,  3, 184, False, HS, 1, 1], 
               [80,  112, 3, 480, True,  HS, 1, 1], 
               [112, 112, 3, 672, True,  HS, 1, 1], 
               [112, 160, 5, 672, True,  HS, 2, 2], 
               [160, 160, 5, 960, True,  HS, 1, 2], 
               [160, 160, 5, 960, True,  HS, 1, 2]]
The arguments listed above are structured in the following order (from left to right): in channels, out channels, kernel size, expansion size, SE, activation, stride, and padding. Keep in mind that padding isn't explicitly stated in the original table, but I include it here because it's required as an input when instantiating the bottleneck blocks.
Now let's actually create the MobileNetV3 class. See the implementation in Codeblocks 12a and 12b below.
# Codeblock 12a
class MobileNetV3(nn.Module):
    def __init__(self):
        super().__init__()
        
        self.first_conv = ConvBlock(in_channels=3,    #(1)
                                    out_channels=int(WIDTH_MULTIPLIER*16),
                                    kernel_size=3,
                                    stride=2,
                                    padding=1, 
                                    activation=nn.Hardswish())
        
        self.blocks = nn.ModuleList([])    #(2)
        for config in BOTTLENECKS:         #(3)
            in_channels, out_channels, kernel_size, exp_size, se, activation, stride, padding = config
            self.blocks.append(Bottleneck(in_channels=int(WIDTH_MULTIPLIER*in_channels), 
                                          out_channels=int(WIDTH_MULTIPLIER*out_channels), 
                                          kernel_size=kernel_size, 
                                          exp_size=int(WIDTH_MULTIPLIER*exp_size), 
                                          stride=stride, 
                                          padding=padding, 
                                          se=se, 
                                          activation=activation))
        
        self.second_conv = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*160), #(4)
                                     out_channels=int(WIDTH_MULTIPLIER*960),
                                     kernel_size=1,
                                     stride=1,
                                     padding=0, 
                                     activation=nn.Hardswish())
        
        self.avgpool = nn.AdaptiveAvgPool2d(output_size=(1,1))              #(5)
        
        self.third_conv = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*960),  #(6)
                                    out_channels=int(WIDTH_MULTIPLIER*1280),
                                    kernel_size=1,
                                    stride=1,
                                    padding=0, 
                                    batchnorm=False,
                                    activation=nn.Hardswish())
        
        self.dropout = nn.Dropout(p=0.8)    #(7)
        
        self.output = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*1280),     #(8)
                                out_channels=int(NUM_CLASSES),              #(9)
                                kernel_size=1,
                                stride=1,
                                padding=0, 
                                batchnorm=False,
                                activation=nn.Identity())
Notice in Figure 1 that we start from a standard convolution layer. In the codeblock above, I refer to this layer as first_conv (#(1)). It's worth noting that the input arguments for this layer are not included in the BOTTLENECKS list, hence we need to define them manually. Remember to multiply the channel counts at each step by WIDTH_MULTIPLIER, since we want the model size to be adjustable through that variable. Next, we initialize a placeholder named blocks for storing all the bottleneck blocks (#(2)). With a simple loop at line #(3), we iterate through all items in the BOTTLENECKS list to actually instantiate the bottleneck blocks and append them one by one to blocks. In fact, this loop constructs the majority of the layers in the network, as it covers nearly all the rows listed in the table.
Once the sequence of bottleneck blocks is done, we continue with the next convolution layer, which I refer to as second_conv (#(4)). Again, since the configuration parameters for this layer are not stored in the BOTTLENECKS list, we need to hard-code them manually. The output of this layer is then passed through a global average pooling layer (#(5)), which drops the spatial dimension to 1×1. Afterwards, we connect this layer to two consecutive pointwise convolutions (#(6) and #(8)) with a dropout layer in between (#(7)).
Speaking more specifically about the two convolutions, it is important to know that applying a 1×1 convolution to a tensor with a 1×1 spatial dimension is essentially equivalent to applying an FC layer to a flattened tensor, where the number of channels corresponds to the number of neurons. This is the reason I set the output channel count of the last layer equal to the number of classes in the dataset (#(9)). The batchnorm parameter of both the third_conv and output layers is set to False, as instructed in the architecture.
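If that equivalence doesn't sound obvious, the small standalone experiment below demonstrates it (the layer sizes are borrowed from the last stage of the network; this check is not part of the model itself). After copying the weights of a 1×1 convolution into a linear layer, both produce the same output for a tensor with a 1×1 spatial dimension.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1280, out_channels=1000, kernel_size=1)
fc   = nn.Linear(in_features=1280, out_features=1000)

# Copy the convolution weights into the linear layer so both compute
# exactly the same mapping.
with torch.no_grad():
    fc.weight.copy_(conv.weight.view(1000, 1280))
    fc.bias.copy_(conv.bias)

x = torch.randn(1, 1280, 1, 1)
out_conv = conv(x).flatten(start_dim=1)
out_fc   = fc(x.flatten(start_dim=1))

print(torch.allclose(out_conv, out_fc, atol=1e-6))  # True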
Meanwhile, the activation function of third_conv is set to nn.Hardswish(), whereas the output layer uses nn.Identity(), which is equivalent to not applying any activation function at all. This is done because during training the softmax is already included in the loss function (nn.CrossEntropyLoss()). Later, in the inference phase, we need to replace nn.Identity() with nn.Softmax() in the output layer so that the model directly returns the probability score of each class.
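For completeness, here is one way that swap could be done at inference time, assuming mobilenetv3 is an instance of the MobileNetV3 class (like the one created in Codeblock 13 below). The output tensor still has shape (batch, classes, 1, 1) at that point, so the softmax is applied over dimension 1.
# Hypothetical inference-time tweak: replace the identity activation of the
# output block with a softmax over the channel (class) dimension.
mobilenetv3.output.activation = nn.Softmax(dim=1)
mobilenetv3.eval()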
Subsequent, let’s check out the ahead() methodology under, which I gained’t clarify any additional since I feel it’s fairly simple to grasp.
# Codeblock 12b
    def forward(self, x):
        print(f'original\t\t: {x.size()}')
        
        x = self.first_conv(x)
        print(f'after first_conv\t: {x.size()}')
        
        for i, block in enumerate(self.blocks):
            x = block(x)
            print(f"after bottleneck #{i}\t: {x.shape}")
        
        x = self.second_conv(x)
        print(f'after second_conv\t: {x.size()}')
        
        x = self.avgpool(x)
        print(f'after avgpool\t\t: {x.size()}')
        
        x = self.third_conv(x)
        print(f'after third_conv\t: {x.size()}')
        
        x = self.dropout(x)
        print(f'after dropout\t\t: {x.size()}')
        
        x = self.output(x)
        print(f'after output\t\t: {x.size()}')
        
        x = torch.flatten(x, start_dim=1)
        print(f'after flatten\t\t: {x.size()}')
            
        return x
The code in Codeblock 13 demonstrates how we initialize a MobileNetV3 instance and pass a dummy tensor through it. Remember that here we use the default input resolution, so we can basically think of the tensor as a batch containing a single RGB image of size 224×224.
# Codeblock 13
mobilenetv3 = MobileNetV3()
x = torch.randn(1, 3, INPUT_RESOLUTION, INPUT_RESOLUTION)
out = mobilenetv3(x)
And below is what the resulting output looks like, where the tensor dimension after each block matches exactly with the MobileNetV3-Large architecture in Figure 1.
# Codeblock 13 Output
original             : torch.Size([1, 3, 224, 224])
after first_conv     : torch.Size([1, 16, 112, 112])
after bottleneck #0  : torch.Size([1, 16, 112, 112])
after bottleneck #1  : torch.Size([1, 24, 56, 56])
after bottleneck #2  : torch.Size([1, 24, 56, 56])
after bottleneck #3  : torch.Size([1, 40, 28, 28])
after bottleneck #4  : torch.Size([1, 40, 28, 28])
after bottleneck #5  : torch.Size([1, 40, 28, 28])
after bottleneck #6  : torch.Size([1, 80, 14, 14])
after bottleneck #7  : torch.Size([1, 80, 14, 14])
after bottleneck #8  : torch.Size([1, 80, 14, 14])
after bottleneck #9  : torch.Size([1, 80, 14, 14])
after bottleneck #10 : torch.Size([1, 112, 14, 14])
after bottleneck #11 : torch.Size([1, 112, 14, 14])
after bottleneck #12 : torch.Size([1, 160, 7, 7])
after bottleneck #13 : torch.Size([1, 160, 7, 7])
after bottleneck #14 : torch.Size([1, 160, 7, 7])
after second_conv    : torch.Size([1, 960, 7, 7])
after avgpool        : torch.Size([1, 960, 1, 1])
after third_conv     : torch.Size([1, 1280, 1, 1])
after dropout        : torch.Size([1, 1280, 1, 1])
after output         : torch.Size([1, 1000, 1, 1])
after flatten        : torch.Size([1, 1000])
In order to make sure that our implementation is correct, we can print out the number of parameters contained in the model using the following code.
# Codeblock 14
total_params = sum(p.numel() for p in mobilenetv3.parameters())
total_params
# Codeblock 14 Output
5476416
Right here you’ll be able to see that this mannequin comprises round 5.5 million parameters, by which that is roughly the identical because the one disclosed within the unique paper (see Determine 10). Moreover, the parameter depend given within the PyTorch documentation can also be just like this quantity as you’ll be able to see in Determine 12 under. Based mostly on these details, I imagine I can affirm that our MobileNetV3-Giant implementation is appropriate.

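As an optional extra check, we can also compare this number directly with the parameter count of the official torchvision implementation. This assumes torchvision is installed; depending on your version, the argument may be pretrained=False instead of weights=None.
# Optional cross-check against torchvision's MobileNetV3-Large (no pretrained weights).
from torchvision.models import mobilenet_v3_large

torchvision_model = mobilenet_v3_large(weights=None)
print(sum(p.numel() for p in torchvision_model.parameters()))  # also roughly 5.5 million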
Ending
Properly, that’s just about all the pieces concerning the MobileNetV3 structure. Right here I encourage you to really prepare this mannequin from scratch on any datasets you need. Not solely that, I additionally need you to mess around with the parameter configurations of the bottleneck blocks to see whether or not we will nonetheless enhance the efficiency of MobileNetV3 even additional. By the way in which, the code used on this article can also be accessible in my GitHub repo, which you will discover within the hyperlink at reference quantity [9].
Thanks for studying. Be happy to succeed in me via LinkedIn [10] for those who spot any mistake in my clarification or within the code. See ya in my subsequent article!
References
[1] Muhammad Ardi. MobileNetV1 Paper Walkthrough: The Tiny Giant. AI Advances. https://medium.com/ai-advances/mobilenetv1-paper-walkthrough-the-tiny-giant-987196f40cd5 [Accessed October 24, 2025].
[2] Muhammad Ardi. MobileNetV2 Paper Walkthrough: The Smarter Tiny Giant. Towards Data Science. https://towardsdatascience.com/mobilenetv2-paper-walkthrough-the-smarter-tiny-giant/ [Accessed October 24, 2025].
[3] Andrew Howard et al. Searching for MobileNetV3. arXiv. https://arxiv.org/abs/1905.02244 [Accessed May 1, 2025].
[4] Muhammad Ardi. SENet Paper Walkthrough: The Channel-Wise Attention. AI Advances. https://medium.com/ai-advances/senet-paper-walkthrough-the-channel-wise-attention-8ac72b9cc252 [Accessed October 24, 2025].
[5] Image created originally by the author.
[6] Mark Sandler et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv. https://arxiv.org/abs/1801.04381 [Accessed May 12, 2025].
[7] Jie Hu et al. Squeeze-and-Excitation Networks. arXiv. https://arxiv.org/abs/1709.01507 [Accessed May 12, 2025].
[8] mobilenet_v3_large. PyTorch. https://docs.pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v3_large.html#torchvision.models.mobilenet_v3_large [Accessed May 12, 2025].
[9] MuhammadArdiPutra. The Tiny Giant Getting Even Smarter - MobileNetV3. GitHub. https://github.com/MuhammadArdiPutra/medium_articles/blob/main/The%20Tiny%20Giant%20Getting%20Even%20Smarter%20-%20MobileNetV3.ipynb [Accessed May 12, 2025].
[10] Muhammad Ardi Putra. LinkedIn. https://www.linkedin.com/in/muhammad-ardi-putra-879528152/ [Accessed May 12, 2025].
